var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz [binary gzip data omitted]
^3zw.w'qnF}`Nj3lwQe*ÄNZj' NDm3ጆPLym;n)"B#~TH#‰ଶq."LEtA!Za%9e{{ Zqhgg '}>?z뙳ۊeY>^wF&HBrƓ O Y"-ҪHiA $7 ?!$7x?(v%v:X(PR_ J3^%)⒤ oTDD%Y ͣ n[y, )h-?129I8EiQW KK9=|yT68B#3 L9ۚjkDIc@wE h 炋ڜ D*6jRB]A%vkk<„<!q 4_iH(G:QxȤYh oϫ2ee˕e>z~>n}rs`) _4bUhב 1(D|T(}ofԭJ=??^ߜ5?*(0Fץ7;Z*; TKzoy_Vhm;Vs1_ 3+U`X^ÂKyg/y 0k  ~>Gvd?؜*r7,Ee Z{X\&PF7l>:oqϻĭ08wݠ8 RP, e/{eO!~yCA83эjYuzXs>rZJ1[ uD, =aS6A9j aW9|u$5e]d:v%[ hػ7èHRd>_5*8B7oS2VcwՂlHo|?g[5kko4y~'L_:ve.eƞַbp>7vFi<ԃeX_R|׬/w)B76P69B^F*X26IR9鵶2X}{{ tRI~S& TY?:XuCuHĈ@+~\y0oHS\PuM*E'|2V ТA*l6X3N?T~Qc<vyOץOn=AC. nuui{@wmMLqhRˆZ6lƇ;My\,@!s,ayM7_=qt z ?Y{ΤM}ϯCkF_뚹<.=CDŽСXhm.kINjni(R*5&l{KEhžN0t\Fi\7%%\{'SghnOўў==E%;d1c4!qtJaJVƃ$UI8%) G&8 9I>p/OxK5DoQFyr$$gc6%16)Od^d d> h $Fq8 J+рzTuV@[GJ2y%CZp6d/R#^W~M)֫1Ky`y C>YKr ZJ'S9ZIh (232Rv?M q~m:?4nv} R;y ӫu:_5@/ [jPF8' Lpʏ'/p֯UPv @j%=:bUZ(1V) ĻHte̲ "򻩻 .75XJm&7Ye;B k,vjSQmq>xy5w}㉝/o_›'>*I3Gwי Srhh:PȠ 綘3d&DRW>*+1JU:KaLe^*a2'DHB13pm_C:WN/}؆ ={j ߴ |Mo"Tty^LNNz,5ʉbvHr&I6f_W:C&yɴI*pɗh'AJ.Z!t0Hr`P585Iq}wޭ~!BT)61vhYִ,@Rϸ^W,;wwuFKiьFS,20d4v7jGsN6[?|uԷElE̠3H21UVrl ҩ5ä%j NEtG(]~̦ؓk9OٶoA_}9iX5yYˊ.O6}7G.~ۧ׳õpeȚ4*<T\KTs΂\hA{5G#"\ #00(U"y`tlBjaZRX#R#WZ9ɴU[%;*2z2SBKT2u7P=34~rW d̉G.5X_J*CjѣCG P!Ej[vS<,6Kp!+ ^ket+øI1 - TΌi]bRI2Y\"&20^ > JW!=.*=ɿտNrq9)eKJo>,.X^"CX/+GqJMc0 쯓y><ۛ]hirf}k{lvHf٣{!pE8t?|(IsL7-a7fo`=t}s|3ץv)zdú>gΟ~~ԭ2}ߋۿhXtO]8~ ц|j]1[r_G~dt:*Nw{sCq)4s7Tqӕ-T5ݿ*z>]N,vvٮbN[W]v'N17EEꪫmS РljuFm0z,kj3Mu:o.=wAXEfqF7r[ .O?\_xĪgK 6Lp em-mA ZK!d K>TYy'lO̝}4Δ*s@|z'pt: 465& A =@SPzU@?4z!Uۍ67Ʋ@_\^sld q`#CV: #֧ #R%|'YhK7< NظyDd![cjLBuI0@Phl4p%9F93mIsThxDZ}lQzLSZ'M)!t4T\۳EjJ5&|t#,6@6iZׁ|H>FFS-HVouL+?X^} ٷM~l V4by2>Zb_/5ezvZiC^$d;y{Ӛ0I5~g@;?ZF^:EIL胲Z1aS5}HngmnX~ukwTmBK^ ?w^d[|{^\;|0z &eyPUT/l.w^- \w<[auj3|0k7jFT($8X""* l SH&2{!f}dE7˷qV ;M Kt =Xz!PISP2RU^J( P7KRzA8ܐdNbxH IIZ:AZBeE&aEMt$+}Ρ[xu{.ɱLmb%sLsr[]t9m{jϔW۩! **vݣP%AbQh5(}rVbiL-1122ZcFdN-.fAO .8Rt9+)n$pdXMXTj3X,ЌXxؒua=?:Uʑ<= ft;d s0b]1,h`bYpVK<ͤ2KSoݓY'a(D$46FE+|l;|'M)`]ZFl\㎉y.]mvڮ2jGSŢH=*%"!%.:"-@!iwx^O}ݔY .Y51CȊ2iȃ=m(6D;oK ޏۗaNC< m2&|LSZ\/Cj6(j˫Y Iy@lHQ[P6&uB!pQLQzv\J1&z3,o Yc#tF%&b[@a.TDۘ^>EJ&~ftL٢~5OԮ,71i[>^^|q3!rE>p=/Ιo4fסa~O-q%ͬ2ciwjm^TzbD YJGJ6s!:O^,/h*@súR\ & RP'+KNQ"EA/~N1rS 11FLC3̚"B6IZE}* )À<>ä=)cK.1k8V^5-VոAt7vR87bGJ?{Ʊd O 6cU?]}7l6!S֊&\3CDIi(6g8S]uTwulVPĕb_iEJ/0;HGpE[R \izpU4WCGpEs?쪈q_H+qHW[ʗ`\Po}`25FMSI}$+ yQ'u E`kI\{"H+v .R.M&=L V\-9aP`Fя?/gp<WQe-QZImi.A/emun4̈+&N;[fc9^B1;.]fm5+UǭdɌQ%&s%ʣUXB¹l\Z*ܣٖnkl[}RQm]mIJVζJGpUY)*/pU$%p"J+{CFӟ".¾UvwȊKWFjWiv^Gd?dZԒ.wWwoxxr 7 PBXIru&;`r\R|6iruA%n016j/xa?鿾=j-q] jCo6tn< !}@I]2t卛9}ZEiRȔ/N'whQ9*s *%iZ9ex %dewh. ܀npbxkNS^E`?A&Hw=(R>EηJ#y(?`0#P?`sy?`0{vԲ#k S8B2R'U:́`8*#T6Ti@gH\O^f6oިǶTy#Y +MG1&AF"xK脰3QIgyq2O趎"B :##2+,2C`YcL{Mq4̻Օ8CBjwaw0iUxzO|J>qmRp$=|iZ:MdKbMzΠɲe侲^8%5h#g/u"5։bwXFt:-#ePdTA=/%:`E&0*zΝINY.uRKPnxlJK&| {ˤLgDep7{:fs?묉ѝC47\܋Z)g ;GR:9dØ/sާl92UƓ! 
)epcW%h ']FFCϜ2g% (2, ?0"KAx/)Jz%6LIZCz+qGl?%sWP3MǨ{[D܃R*"(_aIJϘK &E"gɣN0.xLL̐IH&&'U#aTY 1c)L5òDwAڬfQ Sv"{)r $ A -#o( |I* /%O/<,T ̞kZ qÈdc%k4dyb'Jo'R{4Rp).xnc}U+@VcgKwIl2L%`*p%mϢT?5=0UK&K1,Iw5(RDV~ XXa!hL!ukHv4+ [wdxr6yZqv&af#Ӻ!DѰ䏮;:Y<:Jg4\ew1{WB=-;?>hyGW~SF~XYjR Kߚ.8hS TA??YN(ttt8{a%PH3sHo|ty\yD0\՟CsX_ھmKǓ__z;8N/y씔?bx7WZ;ǫeDr~Zgc'Qo][[6inFpBn)I-җfj녧zԗZӺ?FW' ޕ(!%oh]rѨg$Eb\`$ Nˆ@$b89jӒܲheObtӨk诓L5f@Z5]*-F?`xe.G~kA`<zt:LY湪)ER?I9 =Ӊ )_ 9Kz;sw84jC |#t6;F)4KXcHRg!=+{Ϛ׭oץdߛ㲯t<ys@9Lj5iwPy7;d7YzN~!mb߮tR?Ie;dd^8|uy&DCvI̚Q>*٨6?vq]s3*Z\r%a6af&<'e"1slI~3ʹi]|ؙ]ĆtyjtKmڝd]n}čÌoltӵٗfZjMˠyQ{ڹ`>P.&iGX:]4i%p@^MXH8|5 m\/& Q?L 'v|-&!~Z}7L YPɉ *ŀk[{߰\KprTLBAD@{NV MJ%ƩL6f1rR6J0-]!FSbYGR6Z Z w eA$sڄsadc% 7=8xKCyՎjM7t=S~>avY xm f0Kٱ.r,۲Q"s~AIo':d8ٶyZ"{z0x+,ve5~/tߨW NnS?U5w-+"Vʢ$j_9x !H3e-kC-ۮwlW%JǬ>8uJhYb*PA 7TEMt|IoN _l#%.+IiG+ pwemYzJfrZΩŀ_:4h`tV1M*$mSD-\D%Һ6,dT ntR^_=upқ2l+ \ܾu›+foQRa<oz0oϾ҇YeùG/zz35_V!^a*3 2ibk_OnjnnB>-]-9L_۱ +> J:1|ls ɍHp;%@J>r%]uVG`undcd;Gs AxPDc-섍$L0$El&Y>>08eQ̵@g>cRn Ms/Pj,,J>Ϝ45^|;Z4./_Eab*-WӲV&Pq!^VQ`U#hk1x2yuaaaaz|FpFe Č) 7t<1B`D嵪$b.HRd묭U0e{MFjF2?&f~vnDme6̼w͙Ǩ)8-6< v煨h\]`~pˀ? hS%VfB/>ixOi7}˄}xds.|CoTX lidzR0V5IR* cb昦֙Vt ٻWYV1.+Ϣ2 L#L%hZrJet RȒR0 rwwcvS cή]aҬNHXvv{٨AN-T9 8W\ZivpD(sy>;`DX2~x!y&Uw8wW!4!&.^]V.A%6*DٹBőK('K2cV%V}UiwuzZXġMB=~T Y/VAc}0ZE0yL±Fe2Iv0too,CȻ |>F\s,=rf !B6I̳ ȹ> mO{-oarq p}&xWb\<Hyva:))\_wlZ RZlZwq03fgVVi-fVb=Ӈ'7XU=]TvL"{3[<8?^u% ̓vZoY=zz9_{Sq_U q**vU>+UOο^f6z; ʺ^݂+EƷw,_U9%Sϧ~"y۳s[*BkK525.[l"x'gңT|a>j tU%t9tr` l\[6nʱXaX`_]TMъ1oB^Ktlߝnnv 0}4;MvO3O}n{X2Bv*an7 W$nR;_rc[_&ę?l\wo(g<|aj ZPl^lu jI>WO|I~A vrC=cO5^˄H&/.+9-G˝F}S 11e2/ȴ )zNpZ EPw`y`ࣱ&' ^oi`&v@-zc_١8[HQ-wwûcE7pGik,rl~7x!Ws\`c2ysX9-~*GEZ{.Zƻx4ZWprGF*hl@glLDހF{tv@il4w1]b,Pf &Lq OD IY8=r.x nsKku$FFD \e `{^xG{MV\;:N5q6+>&mv2E[]N>t_p3!4ш-%SZ}|>ո8c=nFyQIc`Ԓgд! X/"("tuN:QI*K؃>1' Y`$R ONKIaoM)z༷ qK`@̑읕"f/TeYM-ů#J}GSi!X6I:9yKS}VJ*%@0mV@!a6^#/br""B445kL}YK 5)W1RϾ?t/M]}9@ɮTH/W*(glXI@hNNyGܘ :YL!~{evDZBZ%Һ7)'k?ng|AIYR eJs& -9#6{JB$KŞ #xINlJY:d$3)ACܸQWQY&w)(B ̌R RdTyc7҉鑥5z@}YEʹ[Kŧ?o2IΙ2olt!mo {@Q(LQ"L+wm1ɮ=ue6+:j'2x P=f !ӾeܰWގa\a;vKn-˰BW]E]3ey9Ww,vʲV ~E9 4x])TXt P }f3^3r%#ՎGp#lu8(wrL~r0xALh*37AwVeK&ʙA%spL TNKAyhtLBhѱ8wΏ8b*'PZW(9,E1R2}arzCdg2hM(r*0GQF3)x+LR-wS9P+k>ͼ}P<{a P,SWIpINAppnH" $-HQiQ0< tZ$UZX,tdqRHgVWd0;mL@F]b\AA.?nM/5nAIS$v S^nKPP7i&C0RkFAVvC<ދu3mp2OgErn&ҫ$sd^EC4oc&ilw7_>򻋪c]yxSʽY9i~xq7(Nߚ:Ř-=iG?ղ٧'W*iWY ܞPoBV{=)B8_J\y+׶w͡?2uw&>}oƔVI;&g?s릁Z4+ ~~yF;wRY-plrdx_dK3p7㩞Uּ?g>䒴K>Z^JK@ʞ| ǰq~cGqeLe<7+3NΧ+6so\\o#V:Ŭ4'l ԏt7$lf a-< ھulʵwJdׯ`+f&ʁ@$z͛eҊ[TiVn>]GͩKgtcJ?5.ӟGcz8?-fhzdz9 v|I8ԇo(a!jhMǛ8r˟ՎLVcfYoz6 3C4ؕ"Xb|3;y^@\6os9&]U[idbZNsp6=ˤ X귛_>{<~ ,o, 2k%O3+ +8'gdpPOBvh@dhl)+MȘ$SpHK{y._բ15]Z>I KMY`|U 4}:C.$x6a<]\MĎW[z;]H'a^Ļc0W0WֱčYUK"Ʉfv +WVFX^E$)gd5]|'^*ȜuL%"S.IT&"@堠/"*3ҔrK4' )YˀNϬ LQA`1ښWWH3w~b-TD&L 9{8wEd 2r46| AFI9IQ`tiO+19R`g]*'+d\R%rgkCWC0f '具”6:6D+Ɉى,\n;\,Wcqd90cr.CpHp-;pUs,d0L2vD26K\:F@ο5hk\ ݁r*ԛ:iNEwG!dLP)ڣ*GȰJNZfؘk?Ͷ:4woMuDM5CyC/bZaJ ބ FR3i*yϼЉcfL&&! 
ˬܥǀޓL2G٣kR=z``Dmu#ʌz_pfMdDj'HpU cr#kij]]ᬉB@Pdo rxk*j]QKi< 6O\=G&H-+j.ޟ}Dۜ7=(ܼfzw槗Qx#bjXy ri.^ѴAܑ:Tۼd6yf S}6A+'kZzRMps*υ\yɡL+)Q c}ȍt=F@t%#Z+@೏:$-h' :PFƃ%2ߩjذQH h='."̽PAY R2lFz@׻;!σ Ǜj,bS_O#L0>i)Ʉ[1uy4.iVK8tP+A{_ʣ^sKՅ /Vy>4QQ3̸*e{KGJdzlSA$Sgd9 a,xsYqUJA,pK4l0vg15 u2zx7,4Fy8e$`xo#9.0|]=fC@>o#s ,nGX rhӨJ`&:CsWeA{fs w(^G)E/8a%-H`㦘3F tDR[\T~P(Z+iFA&*1a8\8-Pb1370Ba63M-\jvv2뗭Yu#t=:_f%|M)(E1HTutVDNr,{Iz2w&t21a%88[N'91I6r]#%/!7FZ" U f;;OKZ q]w)^|A<MG]1Z,9SNWT3X=8]_KYB ue Je ò+zbELjRcJ+f rh32OW ..,`x7gQ[%3BI&朊Hn@lO & jo'7_)}Q{ 8?7fէځH;,4T:7ڦ؝܆` Jlu0J(X#BR>,?\sApj5^4BY2è*JGACYȳ1^HeeC ˜e%U ܨ,L,( dqd?*B*Pyn<b0t`я`t_?-Sz;'$>+fLSuxx7Gmx۪dŗMŠ'̳9%0΃j@$!j1d#gFLgz:UϨy5#HF 8$gJQܑ$1-85.Eʘ Q y \p^3QS'^qY$eD K_ G"DKE.QJL2(dN'9~0?Դ^\ka-:d/80S[-34T/ ċmo\3>(R1r d#E"]Fmj|z:Y>8PpV93*[!Ĥd̳ >ɬv}?޻l8NIM/&%[`IT(0=Yd ׺vfJ/zdRZbZ4f/y+%/(oȦ&ܪm6uP,]Lo|2 yӇqȓ_DzӇUz8ǟ?_fK%ߑӑ"~ի7S.ZJRtdHv-Nz}ؚ~oNYt^t7.nW4Mc~=عq!y7--Pn>_0Xw5G쭮`fRw{1<-Y=>_712kڊ%})#zui|@]`Gu+zQc܉Q3עݓIMGv(`kʮ!]/kf;RQhlrlh_OBnOV)8Rz<d5gKmIȕ7lL`owPcrq/:5`֍p:&wd[/v?ߔ?V7ޕqdB;m}vg &l6O2@è̘"eR-/>x!)Rv uWwz]{d4vN[?|uˀ{ikg{&. د lQb2lyTlR[ȾӁݲ8AN~9On& u5_\cϥgx.?"npA*(&I*sAr'i Tq AR,x.6EA߹[N,%~})-E ,<<2(AR'6TcD%s W9 e9/έV́!]zךo5X͒}Y^^ !6c@Z ~zX>w !G[pJ`M04]I`~~^_L QۋBJ  %^}>y;GN2圔-F?Qw0r0Bd{bkx @s,ý&Rp}!Uu1Yr)&cS66`Ԑ{i."할`@Ci_FFiPXbPGqv4xk"&DHFE+7~t39w'7"CezK%7(+w0]cYB)BiOA.WA|&j꫹u~84\adKYvKۛӼglU^⍦ a0lL)Y3ռmK1VŸVn)rSCs?*g[ H.fj9;vLeKׁ~c 5=Tӹw9jqo6$G50DB}F䑱 c*)ȟ z6Hޫ*H\E@d`cZFf($ӅZ 8Qc;Zt7~]"!^t`1,8ʱN.2N4ƐQH" Y~]L᭍c]{M_ oti/bFG1r0lݾiX:rgQY菵g8B %*pܚRG')ƞg FN x5, piPPHB ;\LZMQ[%QFΖS>?"m=B۝srfJGZU(ݜ\tY_Y|kӲ^wʭ{LRqƓK)$ ;H a[o7VA?>xIggAm6,o?*0{T$~Vhô:iCQ/I єjp(lhqǒq/`78a?nBfUpõR:X>Ad" 'mMO5m#HԸ@$Q~(]EMr%eN\fH&!hN]Ǵ6 nLȬIpC|9Д*}&ʇΉBT*@ΪӉ3|ӕ}llK!sS±^YY u O#Jbթ< !v\[zvvz}yu: &Wyjn 0%s?'9pPR,QMe{]h|7drO߱I?քf\R;#/\W xAD@5n8tz0jdVdW3}ޢYQ"#!Ts/gs_:єcxu np{[a%8w]vR{(soxϲi0rF|NA83ѕwCdg[S>!b}+l8X6%aӫ6AM*6xoA5w}+9LÏhܗ]lօ]ߌwuQɴ3?Z+ YaXƘUib}UMHͦ/griෘIA3WS=퍆?)WWUy{K+jġL.4?.ƹP͠h$7Osu}Q_)S~cf7?'/K]3?ܥHI3Y]ƒ䣇ScH3y]WLJ\.a|.{ (%^MjT@[|%oG2c ڲWn.?/kHE](r9;LV0h͉Ҍ.H/nh!L&F:Z^z[$&y ; U',3ֳMhK2ZZ#Vy7PFe:%*EKhHCKA&q_ZBs.("{?W 3[w UO S`iUջ?ESlP t0 =+lRp90zvovhO;o53Af)Nr1 ^4U"{mBbU5P21C7D/(18-(KW'%u;˝^ңZvFΖnjm>cǷehkE*X 82񤙵G>o P\>xU||f7ޅ|W0w\KZUk:SW\qa+"/㭯ɭue\r4 l׮ޥ7Q*y\0ּP[ 'dSV2QԲ9z]>&]fONjUע҂}j55^|;5 5UWS*4tUm뫻UiA%F; BDNj\PjCפڗ"+O4yvC{mo*du~-nᲰ`G\W i!ĩ\Q[.̢Y/8ÁH,0%b*Z5cK?*~>y:_l%蚒h{;#җvL\bay>[JN#*ϭu!-zKޥz3dYE CYOW\o {)%PVBH g)43^v@;v:] )#FKdМT9-V恙`(nI6bG\3uQ9u)I"i%Y4AdCklAj4#E8!Hx ՄT%ý "hxdS/ [E$9\hM@ÈLRA"รR`fL{4Ի&FImPMr!x a^"7DÌN% $#[SR\u)c8B%ւ֨wh*]Du-Y!ASΧ=É={&Gۚ ܁qוa΍߆hXzf` G }w:#Jo>\Rawn *6Ŧ/}߰jWM-;mW=JmʷYx[9i&pS;YEy7sOWnr~3^I+dqEϣW_P܌qˏE&{&qTjpcI5Fʯ<\JEșF 'Uԑ'Pl p!JݧM]VY.x O/Mn ޛanZ*ϵ90YRtޫ^~VD:j>LTsW!37ᤲNBDTv/נaۻvrc)~Df,~zf9 d 9_],IpW1w-N f /##w$U ^.1eGݳOYoOȜQ9( H: ʟ“d$"_RDnD:ik*j 5Ƅļi O^klv]Kk;^L3XF1~Kjsb#fb7Εy W&'ڊ]xgН5L0`9kX0i9`.zԠ=`3@@ޡhi_CTDTj&`q"2h8xbV$F[v8q$ف(^Hո&beg l]4ZR YOegI0g٤CQƺdeDA49GQ^$@>0QHBɶ+R*i.KM(qZQAqBqm:Ef W 8c(U,{V:u}j,dYJ$}*r>\N>6N%9WBd4~G4$Gsv 8dh5Z*N)U )X}61UQ,ԐqT.gz\ 1mZ&|ۚmj٬yDHN}}J7Y`s͎&:EWeo#6G#Z罭Iɷm6I6k}ZەjVo2ZmFCڶiX] tvrWO1/.*w_l#XB좹UN`n+ y~ٙ\~)ӂ4ϧw'j9%gVm m 뎸٨wpuIl]nҺf,Q=׬JSe^8 Ƚõ3ՍSQt[Ln٢|\mu|3 m޷QIݐ\# YH4+)Y4۸T_yBUkl Bs@EUөuht{C}Iꁜ ؏OWMx-{`63M-`<@Qck\vum4]{sx0n4ܸghPhVeֳ"֚mZ̛Ǵf=ȶӡ~jѼӭ}Œtݛ5ͿZOlbIX0DT^\eM^R"(%&:.0&Qzכ;9)K\B;2!sC xCB0.JgFSPgvnlz^@i]^~f;Ymv5N13Imy0hzRMH4A*ܰ* @K)TNM>FHOx^8$RADȕ\,%b[y~`$HasRFc;&OMmDB!~ZO-3=l]IL"Rv_rO_ c)M]eZjf]H\88 ^8nP .aKOvhR'Ñ:e^(Q x)Erd@$ <'hmrʵ 1]SShc&ijb NP*D,eÞJ@mrpkfdRggltA> 1,{ݽ~y t1tBT.8áȂR\o(&('he(.8pQSsRDjg_?,'(#[>dۃ8lr}1AIq:};!O*gO&pU-U49k߽Rңu}aյgBr@PFI VÏ;1).<( wRBrh 8e=k*A$AR- lB;#g=2v'tqƶX:Bcޝn,+Źko{oj? 
Wt#vQ!vE"H,8je$1JĒ' f.s8fQI0deFs \*/"(fJm& #x>芜ƣb jwۢUڽѯr,Z)*K\ 6r$ F< iȩ%C*P<\!/YDJ .$ģR \+~țP3MLj{DZՌ$sN"A0Qa#jqHɜh4%D*qąGM(X8z"(Q{4[Eg΋3r#LjSCuv%[bƤ3\b`ReI2$WmEy'k.rM>A4*RqǶx:C׹=r5E䢳E]cv\<ޏCsU8 yx6W TP#Dq'QՏ1g;-\r]DPNL%$J,/s`)UHƿsufvKҧR_o1ؤux7JC7{zR޹y튻:&0L~]7a/c[MXK`$%z1qpG\ԭL8^sЯN2 ꥍNRita1si= eoiJ1Ot|9CxQ&A  P\Xd$F(}>hG%q(L{E7GWy)BenS~m6/Tl҉+;ur)DM-& ?ygLDI`Q"NV#c+uL }VEc,#.],l 8*fx_?2ͧq+go}ڦGFQ/~_[|l|sc22ejرqoa[0Ԋ=hWVڞFoDw&=I]< G?T NQ]~>oysN_&d\dxr _ʽ1ۧw9"rw"1v _jNTEqLU%\NV rZ?*aiK),c9pB1^a[o "^z\kK 7 ۛj^ESzM%u աԼDͯ>3y \֠^D.[[@n,q^z1}u2?z.vӛ-.fsY`OYɵyjDf˜(?%b2շdc ZLR8(@,,M5ߏ''(A;s*"٬ W4VRږN &]LL+:ʨMG;%).j%L˵:Q-:>gcRȍ1Jm}Y~kRSݬ:R$@nP=iCu&PI2ԙihvͱ7=H2Bjl0@R)oԎhoGb1; T+f\QlpӘq.*5[PLy>ɳG;/jy}}6)VEيg{)UFZkN͔5z v8o)ԏ/rV2R['tLdwlT۳/Wa}iN3󟋥3)X~x))*}b|t{⿥CePoP퇬~퇬~XUoP3i$C.qyrR_L&x)-(kց%dGb6﫳SO\"MѢiv质BѬ2zAFYgu:E;ߞB +q&Xc%RHo%T2įJ,@^\ez1pA^ \ejA:\e*wCHW}sp++rWH0Sb*K+Svp W}g?qù=/|}0gvن8µv5+ٙ+4\ 39!DhB9k!8M Yɢq6T|*uQ9cA PuKO k)@|b**K*/(:wVzb8>j` ; /Vu}>Yy'mCnLK\ JK0V'K3ܞacnN$GUs\QUyd['DT:  O-0B'%ԑovT+ ;.O:5ոqV3o'v1Rʞ? JE㲒;̛sj1m<& OGeˣM؁ [G}maG}ma;K>^2;S}(I/K$}(IJ҇$}(I/5to+('Մ 8Hv yf,:- 4R9IT2Y"="лŅ}%CB˦S-s󞋖=HzBGR*.e)l˹Q29(1/ q6! ]$Aq*Q9SRX&"gt]@Ca e>|jϟXf2 =w= 0l] w^|&Sû;fL|1)2#RIi HӹF\;:٠QqQ? +,ש$;4%([ҽl(Q x)޵#bS7; 0Oӻ;AP$7Hvn~G],S7:u(^GdB >i@RK .G"ߔ${"$r(A1'1HcH"rH]1h i~T1In֜$qkMfʋHzO{6:mGEEЁRiP >uEP VUtژHYahNDXJHd h[&TI&2+m5.ZujfEY4; !k3912*iu-!E2,X ]Լ rB%# :{g nksQ԰f -J\BT)C)=e'F)Fel͖֜Vif IƮ7n{R}_]Wd|p|}ON;őgsJ8&;hDW(JVo56AIm\a*&dt15=JVlj3љUȾ02Lp%QCVl(/QcͤcW `zR*:9_: a-R"Y*@:hhjuS+ 3dF0N<ۚL"9E GQu0bHLYq=l֜'L0El&Z"`N-S4)#(e H&im#`JфAb's*KM-b)1PL*0L`ȑ^LZKjYs6[:]t3:fR]]b]\{\Rz3zk0)+@$92ɇ &RQ..=l&CL+&_'W_#7c=u蝳p%UOoMwh윍3hCsp! vzܧKS(7ݕ';t02ѭn;99"_ٙ`CVbc %,'ecS@X|@>I[P{DR)8s]?~1ŗ^M&4nr{!bnd6ʞ\O NyT_1;m?<L~1әo|2{{3cjލ):׼潫^BJޓufѩh٦?b:Ҷp_׽?%e;exHT2eͽݹ۳n-\ǨVok-eul~uuf?B٬SY_yқ -ԻھmJɅEh}n;N̼dQ=)"sqUEyg`ӘZu o7۽bz'ۿZY_O.F'z^ޞ%sЇϷcLTϻ9Jz}yM]_ϣY~~ |lr[kf-ropsjE!EMt=xv 1.s\)aŞ['Y߁8YuS1Q1@FgS[ @yÝ B?691BbeLq1RVf4ҧ)^jlrܻ2ði^B6C$g @RFΆq$C6XKacUttVf! >פ[թ1w}K\X)Άm9S+ZB=/BV9 K}yEtZẀtJ%'eqTxۂ֋x 5޶@o Y\M/,b; uH rJ iݚ?>} k49tPwQ+ol֐k].vBt蒧d2@Tё%a) $hz(w"̕P57ZYo,1 `p5V:7al>{d88 }z#nnTF=X穫ۊ= x}pZJs=2s{Pf=rY"KfcE(UW86Qz;&}Yo1lجseboћoo1^<3fA\+qݷ4 V;̠̠0d}HX7~Y+>o-K9йs5/v,t^A)%1(Z>hY;Ek6NS[~(Q>\ Q!9J I%J џ@Lb [S~D Hk_c/7 N/tzaQNg9rTr6; :vר.028r(~64g1vCswCцièMex2sIAθ)dvB0-*G 8ik`"@:124$U9_.Yb4xGYsDT6Gb%C=o2[f7T=[]zFm"bw8FZ[Su=L~fT` tZ^_=hE +ܾu{›+.lQR0XSzvmdbT#7ӯn8#}_W^! ќw^t[B³*u<]8ţ?>]n~ n\p09c Ch@+T3JhNIIGNu ^GO#鐵`z(hc%+2:. \G)n "DɭҦ$PpEFmAcsLt*A:tl&3|LFq-NΥj\qh%HexV~06ދY1^=^ieFKsqglv]:oB:-_~idx$Q:[BLFwsFz!z1DGdwj^=" k| h}"O/sb&cV3 !T'/ظmveTԙ-d7*ZB̄EgQ U`l!h*؀XDh d5gӦq]/|="@;ӝ.ߟդCҤ[Z_8|}|€f ݚ%(}]YR殕jNQ* DX9q9ܜcea!t#_kJ5QB98HTF2EL 85 (faP O*jUH g"x!e 1zj$,s ROHG?h~]{.2_YT6υvjsT4CC:mM~9J3^Cn%A/Ĺ/\3>(R1r d#E"]Fmj:Lwn }p>Furf TCI%ɘgAHs}Y) / {^=ʓdq4[y糋QI]jjϳwi/|.cͿ魘VrGĴh. f}UE|o*uiHTA-t1mܗB޸nI,. 
"maՃ}(;R<}-Y;r]::Z֊Jrn:#4dFR$Y7m˧>`:Av`kk"[i ;ݽۈ՘i> jvyYDVs6Br5 -P MSh hGqf[֎vw,Up}Kݾ_ ؋Ih |}ߔbʬh+fnhKIשA]vuVtzE5ե՛QʿG뽶(`[ʮ&]'[f&Xw!1}&XM#ԄjO)8uR;-d5g7d1k_ؼӚhc,݌~Y CtjlvOuJNg6[J])6<@2Ʒ~cO9tXV_:.{YjC- P*R66汨IS{N[JUK~kD5_+J.vLp eXy <f%/UV2{/' u//$b^9ety Ep ʕ>i,M ܒ3qAz=It8(^^Y!/_o-,]z%پ)b?$kNu3N#*5 RaXM*Ukťg4zc y뭝L5T ]M>)'Oי5GͿ=a?f$~,yzZ<~_ƭ=jXf'xn-1h"8LgUݤ+9[d U7oJUzPiA҄n6Moho[oA.< UFۭ9~C4F5˳ϓ鲃D-ߎ.3WV暑ѿQcQ&7չ׊j}]oXϾ^8OoSB{ ť{գ dG/.k?NNZfؐHzԐ8z9OK#l0fyDKC A %m<K4 ~i0%MrRKzR\Ȃc$:T*jN.g6c9"yIzmkm /ټ=pbu"֜Y,fCم]^U4ŭCqɝշ wK,v~n&6O$}<,u#ѽcNvޡjGpƒZ롤mm| Y9n$kJ|"Wn/UWU/'r&X9q.`.s l{a,VRa0ҏdx5uOH?j4(Zz}!qo8F;dـ2:0Ofc@?FvlBPw4<]'iu p@sӸibA{ 2BY O\gKfRaԩ&zrqfEBx,0#ɞaH6q^9"g9d-h8=+Eyߥbg?J]*iFp2!ziyhY1G N%TDR&L 9{8I#3(%h%cHyDWVA%SpL Aƥ24kJzhz)!8CpGxEճNG]2HH€;4%hޤd &gC ,L[N `+W,&P"{6Jn 6,ž\hwK Xb1p o)!ZaMFN"7rU-4W.cB1\ɹd #$ÙVxPTI/gzy ŹvOhY{Av;V p q ZF\NRu\^b4E٪)PZ9Vأ/W89;歼OGr>pi@ hbFuT) HHx-2B`*du K ! M&Y=Zaa2};ԦzZ?i7y O>12l?rZaX}˵?# 3xܫVNı)mcG+~,0z>_`.1扠S?BL)~8 L4,hQEV&7Ŝ1Zx$rC }HCcj{mn'qWguNC=JQL:UcfA2K^DRk|eI*+L̮_ C֣qN%rLRMG>qb$JȍHBþC#bgoY/@KҘylkW+_vN`nW YigQjdb<ȅ]Kj­clHS# Ez}\8XlHlmFƽku{)'J,O}D4ݲߟ,i-S@T4,jdF4>"TFg$7u6h'BF%;ڔ^@;,@?{\q+(c5Rq:*$Es΂~# .PsݠZnA,=PmА|!l@hoR}C ˜e%7$'!*W}!Jg{AcY˜xAZiWz 5=6DgAg)Y8٠xQ<'gy(Oٞe^yz+TZ%TQJZj6VbTo18%<}-u8N#m#B[1sCi[v4"bb:SGgqB]=.ԭD}(UGu%\jKO7렺G뽶(`[K0ica!K D|LQa[7 ;Mm4R?]./*yFH8*6 Nd Y b 6/F*-a7@lx\>:N̶h 8c~"D[vsn;ȪA'W -cu] 1}2´ O^V=kʫ-I]#yP8,7$g5"nBt:ba`68(WD'뒮\-% `@\^騈Ts K l8ϮD%O_(ZBviI##4Y][c9US}a]'H%\>:y$ sT_?qJ]4)b6P'k3hl/-YU:NN uJ~:sx E2HB BQ*G#DžՁ@rʍ1g<*@ 1P ༷T,eÞNH]rg%fT˨lb>!*w,>~>.?F"H2 E>1/GY!r!$1rKrSl %ͿEp,#h%755/ך26͌ sg1FX,xu{*!ɮ!D.Bը$Jj[ć49-j?]n*{*/i&3nriU*oAR7 cI}fV2<Ą"-=U"~>O1TJ"zqrRM09-7\z_~jJnz!4m3Ӄ0Xl|($/x7eϴm~M.eStV&7Zss y3ՖY YSQr &=xJhóž0DӛO7|pYT\XHBQLWԂ#1IEb I,٨/2dH‘xkofo5EYGWzh TY'y衻˲c"Wيq` Gcu4q]o[1 B2*Z驎7Q{lۻIi^̝?Qz (Cn؛b|Y^zw\9}hܴ޼^G7%(}U{t|||Myf-O x͠eg jI6AXqVW:ruS<O/PHEȱiô ,5"&!Ra1D "PJМK탥bwvE{,=ĻGK↩|2 $jDU$ j-%4 y8[T'N)ܝ~7iCҜNN%Σ5ugƅiIs4śO5,$n!5:::b9P'`Z|6B m)8*H19YK10QEv[V[:TghZOvI 33I-:(\W*9f eP-mJ _8+^ƝSnrq+~g7:RO~և#˿17\#p}F˗RDd38(ْviY/}$"1\v w^o 8+|uS N3QgVqh.&kn[+E/OS};)q0rŗ} !$)8e 3W(J醝VU!+Bm*WAU4N "dPHJ щ%ZbYYp^tj.*h驍v|8_N)ⵉ6Daߕ#"SDf $2(0iykex~UϦMjHVj}`{v#cզ}fFzU9w,* }_}FXI#PҫB8 ǭ)(ux"2kC&.f4&.v NJlףY?<4tTsĆ2W:IB,k KT&5""5XAd0(45u1иp"qs.j2壶4EKbAʭ g˜s:?!X~]/H,hݩf76"㕦 ~x\a[wUJ=*TܣxA I(ǥr:% x@±[G)B|E\Or{u5_+a2qI2V`T#GCFe-fE$X~,  旷`pZʏͬې?*yE8EPVJGK{ن9h\D" BP#6:K9W4C)*h(iˌڼDi"J ]F-uK{x׍I}5`c4(wV^$8TG&eGMQw~^/3)nW>>㮧+"w<MhhbU2Evbw{ ?~E}~rABFK|㋰UK׀0β_ G;{[]eeKz}Ok.&HN&D0$aA%37zo*uՀD? G7 آ?./NJD7O=H3o!T{/{_]_j5ys.S<%PIq1lWX.8`|d.=C-,]YTQf F+cxg1ٷi霪ǰUcl \ul z`ڛaj&j2/5Fqfy{wLWYȴn8_ Ղ^=X-w`[ԩ{n{a;\0euCyÚC\h.@3w$&qt3S+_7OJ]?fNN5;U6&v1\֕z䯠{ ǞԣdҫTI}:'P^=\M|?$#X1\mُc7'RS=(B(Iz*UqːӒ%hL4:$. ӧ/JV'^i'ndK,Cb"$cDg2˴ LNY[4c_r󺿝vR;4M~ٽIg;ߘ(ea2,"D?Fz!𠗶/ՇdH@E*PKodNμ|SֳϛΔCJhi F I!*(7PFy:%*EKܢ&SR&2!}sW5IwsT[1w}~φaŝ秒^嘀-N 0!ˏe- 0E$<J^}4f)hvN X>W]8TPHe4"@*!^ sq(NUT[L\Ԡ[lt@Pͅ˵-j^X"% 11DҀJz!DF[6%j4uť߹s"Hx2ՄLQюK!r I ,yA:N9N.8-H&MbOh#FdBJɵwTRj7+[I#t6&=o~D<KAtIF+ӧ|չʟ&! 
qkKwh*]D)S% '0Dus6u} ojZ/ilAd.ŕW'kν^ ?fa[ϭq9 lȢW[زun-7w^oߢ祖a<ot64?E:y8/YQT9 qC6ͺܢS&ڷAwCV՗5N7?nNB7 80ҧBZ"#LGo%oDQ W*oЗP(<"_1QN!39>ͰryO?{WܶlHsrJok_:u)1H$+I"HuvY=nƽ/{tm@?͋ ) 8~Y(& %5(׀%NjuV\}ӺX^iuԦR`0tWUWmsi,)]R sSLtC/G+_zXu>jlsV5gҳg!QO&)?_u|:%,Zs%9Bk̉f48+<:9jϓڙbap0w8׷Gy.AWɹ}I\[=ɜgq6IexpU9^zVK&szÌRy^( (鬠ސ1xEժ2:SQ/0I D*K'C#;U:b0{]H fcJrQ% KTRqt+1v_d٭v+f;lǽzt>"LnWCdX㠢VTM^G##/pzsj]?_4UaM7owH#<-pHzdxU{ 44R k!۱ښ*@)uO~{n^r쬅 $g\KrTS 39cK%:%ֻlБBj =QtэB8CX/QFZ]bm[o5GZO4xfG3ψ}6{]iR4c zJHJK p;aSrt͢DnӉh0Zf1waʽ3ø)aΦ>xˏ)pFl83++Bm= ykW;QfZlt {ݫ?"?ǻpo2dȏU{UUtR.>,"Uq=yU8[*Z~0z?žFI;w uk0U"\ڷX7W s5fr@G<Ͽt&mμB Th|W?rv #qx jå 20=JSD`TIE"D(Q^v[0b EcD2Y+ :R/5e3тFQ4`(ʘt鴌~YWG+:(o^yTEh\,\VKX}?>oQq)p6]o5a'vlQnf[lES*r4GIg?%7ih 5>O ezoa٦dvMʺ7#hJObAg&Xࢴ(*$r1GLݾcB)7j<[֑aY$A[R.k%!RHDcHgY,JBoz4.i1Py_+rVaOa~eZ[hmz}%b Q2(`2\ /Ҫ#98. >(vI8-!<[ڪ;mY}ė3Dݴ%<%ɴr:ޖ~YΏ'u4+Ǔh0. ^}S;_LNR"|~zx1MHm"n~ 7e/׭&woIWޝݟ%b *E"xE1rkY FeqLyXd"Hu4* )h&leF*m0T,%NPL2]W1j:9x)z|cSٳ'_r/)ʤzp@[řf#P@$JpJ9B>g$A~hAj[1Wë^sX6ۉ79._qC"pWFٸD 435mD6==^o&Dahx!#쵤Xr!4)pHiz.RzSj]an}z#\I-0M 'a$eSAd 6b4\Sz/v"-Wd {S!RciF2;aruބ <tZ7;5]?Ah/u!Wnwq :ybqk3e{,<5hΔ 81a^@E)Px2%使s _zp/ΎqQ z5ID D% ?TRP^d G&uBŇ{>I1S85w[fHږe7Ї]YEG( /Mi [ /N6+Ҫiϰs V8s\R5):4XfT8Z]?!)uΗtSDlأ[ d旅nUݤWMSeIE R2waז]e㦎*l:ZW%׽J?t{'ͺ^㴶Ae&> u0lŠ5rn49Aunx7ukVG/($W7,^IָO0 qV{:i_ޞtyc7m_4z<`? %TVXlI_'W7d/d9= "Zw]1J1[;4yV:%AFiBUO z rؠz? v[nqI&j\t^1T,3tkti]9vM8_Q{LjȌLXN n8N``D,ei=A8tbV?2!:zց#W xvAw#c#gym$"_RaRNR =cy SmB(œ|&[JFh6|CnE+] (ۤX>zY#W䬟ۓq7\}x<86 "?!'DN#˵DD3MnA]ȁ(:Jj_g)5J-†R$élbc;ߵ--WQӖŘɛɎj'GVﺠc e & ¼KeMX5т{Q :^.I89?r ڽ!M-&Tf:+^yc0Y doشɯWl)`Dv|q RwP0vY4a#A2rVk/;^sXe U7]\fÝci‡ nC&ﳫ0&.݇W }(̻Yv۰Ʌ.&|4-8먍7oEEa<͏i FmGrùiXs(jw" ?͓x}$4Ejai;*]LYvkb *gC=8לRXB̖432~bFN>eƟTtGIGtD*@|(VLJa*3+I"T_Ї5+ξE+{  AH_!3aScwa|z?&-p[WVq%b4>)tQL}ur,sr2BϧybR9c6(r2 )&{==Wbz߅*^c#"G1!8- [.xkB@T8L~?1+Er^\7c("0CWx {p"+[[g^f,>Pⵗ^s ͝ܝHA'aPUVL%@u|^cXc!rs :,"s3Q ɨ }i}iGZ:!mzCbD \b`F[ R%8ƻcD9DŽRn9 *Ȱ,\/ -J))$ΦoHgR2ap}tE!Ⱥn= _]qƗTeqt:)TRNg3]moG+}9ql %ѯ2cIʶ~3CDPCƋU$pf駪+?,Xdnֹ,μ Q$k PPi{7.Dbx(hcVUShc"1My,ݳ*QHkfdˬ5q!V!* ?Qa*f$3hb29!U*eQ7v2ENaJd|I.h-D<;4@9RDWtnCUr]6К[m@Ht|Q %MYỽR䴪HCwJTUBT{E]eZ|w)Tu:2ae(bJ$).J"*#ZXtH@BOkB2N[,ۄ 62&Wi [ӌuPWDײgf\M} 8o n~  FG$Bq!dHyVYᨕhA Oi}{v:4$ٳp˰ RyɝA1TBߎh7|L+RG[܍n4 .殠vkڱ.jQ[u1[~czHUK_`4UQjF9#$ F'LOzEn/R^ۥHdiD.$G ׶s7VNZf]Ac]D4-#C4^FV3&H$Q=p0wGQ"%s҃)ۛhJT}㈆GMY8~<(GOqVQeDlMC'>lMKEAEA:\pZ#TYl@'0|hcA4񚄀 @+41J#pq+xؚve̾^_x]T%LC]$ⱛ,9u9pČ~ݬ6e7hw\iZ}"1E}ެjjvq8W8'5ūuYϽrZ>:,Px.x qy<+? 
8i?Y{6S,k ~l67Kh)%ܖ.)8"au&3 h35xUㆲ}V7ZaUʕo 띴 I.8U&<h4)MT^zWm-ZGRH#dz3칳j 9qA!hEk3|;HX5H$}_Zle)WrBc|?[1,KW8/y*Tb`1=a8K!Jfkx #+) |As+lf^idK ghL)J3U yI҆ oTED/{SNP{ ~{+lXRXX2iH$ PSb軩驦e[ODN1PD)m  @dT Rn+ΐ5[å &3nN/lû0Ƨu(-Uqb&ޢ;2qGm,KLqy98|>}5zΪ]1\pvˌ*ćhׁBsQ;qdsѭ\єU [dzOzGӳ:u%%sEGxC>ċ;=Xw{ z]֋0>ۢ?o_IT)B8CźvJ2PǺ>Jg'ȅ^ G];Z˓_\|7 6'j>Hڳw薢~˶R*T"Qn8qxsԈ5ͽj Vv7H,$o0[XŪ"ċvdTkf(љClYgnkT>ĬV^oۄX26/5cSO&$rqwKZ5 ڦ1v3'tGP~vz>I^藟ŌOS1_V8F7??%zedJ8~_ҨK{ucx'9 Fowm\a{_4ju%u+/ŚCʿ|PX?a_k2um&F_yd2[FI&P-@qO2]_C_~UCtXn>UY(IܩR&cFoj9 jE⎡x!h +^\t#:$$tq$Nd,G|O`=Q>u?!L.5w .Kg(QĒW3Tv*L"{ԄAui,Mlg k&7@4[ԺKj;ڬw9إn'U+NZ!8LF8+ߔ5YΥs)OyBBHr2e<.t™21Q0~~W*6=n+atDJJMd7"tJTAsC`ǏlR㾁Fȫ,91{} 1r4`n^b{ՓD6PnG˝N kc8.L"%YRɈ16x]w [V\@6h#z"z+`{[αvqɶcZ[bZ07ƴOO] d+&qD\|OX](%.8Mf @;V&S& yT_"팷m.#85OSrW_<`m-@w:7K,vɽv͖a=Vtojceq5ۗXYZv6Jɉjc=XJ{W(0'zo* t_*K+ɮUR!\i)ާʣ(0aWY\N:\e)•jGp3`{WY\R:\e)U ~;pe90Fo5FzjΏNOGúF%7|:VVq(֡ZKK|Pmxs^QV9F{D00U2áY/Jʤ LIW.˜"6ˏ_iu7}s|kunspHY u5yU'q+ vE^1rF?-=8h᤟7)G2rz 7W g3cQEҖT%hu|·4Qq?r&4+}Aq R) 5ZNC)!@.^ /xN^ȱ`gog-SkCffv/+ >z^>ES l{läe[,&%ݢjTtTmݡOpְ7p5r_ ֆU!\1`Uؐ+/pf*K \=GMP{W)H7pUj_*Ki.xB)uplu-^H]XB-*M^P:ί?4/  c~WGԀos|GLq B?/Q|TJ*u/&vxl{GHZOW*)c,sy9 R yMPzp\#+(Ku~aw)t7ʤliWYEyTgPeBY1v{o3hgJށM8ذ 7Ⱦz)K 7ehsJ]xNm~C4Vs"J-]j8m Z ɐ\"C$#.u9t JIE1J Be`6mY\orǓ%xbխdõg on>Sϧnѡ[99vv9ATQ5 wR-P O׌RAK)TNE<8P̓8Xܗ9!_Cko&-Y6IGR .e)lYw s9e4&*;!r/wa=b$!\LT&$3⩣FksN@jH`k%%zק 'OJ3k7;ܢp+'uM8bnbS? 5뱃M̤ԒM`(@Z6̕Ɓ\ L`uk׎u>?)خN:ג}֙,3*d* bƅH,p<O6 hcVib1J&ijb Sp[ 4eb @kԜN)}ۋmsƓq3FZv ]JCd͆􋬤kٻ6r$W4s@|-ؽ`n~ &Fdɑ+vdVKܲL'jvdDf ~~\aoq?=##en56_zܻ"_pcFQG;7D?>Sz~\F14j?6QY͙cRN3v;!wHQ;iH HtW2Ӊ_=ymu)NYevNKupRp J@#L@LwAjRB( ;t艎'IhkhXNԛžo90Zsg)D/JO.CESk}Tdؼ^^ޏzoT|SCxcX6$|W"ڬ:kޤdZL10w L,Q$LR[Go |M*oJ!;hfuJ|x0FmAa dJ8!Y= .@/cm?=0i:,e e2.[0RER`'!Ϣ%05=tXr=ԯ!Pn, ߊߑl^,w<]|;O^3 ȼΘOh\VkVZ낆LnBWׯW]cΣīrnGxYjUK??ub6UШMT_)uO,?YO(z#P*exAw|j1}۸?)xu?6Fwy۬t՗ގ.4{swXs5|OWn cJbݻOwԒkiibj4~ysOF;vR!Uyױe3nF]#ZJK3v녧zׁZtqҧ>(!%ijs V[zz\`|,\'m﷒r.'1}}ö87Gڻ!sos1Nyf z:alp t7elK ƦAF9~q ڗnk\ckgJDG0~Inq3KeC //FK:F{!1I3l6O91ܗNA'z(n- kw;zd`!P%$;6EQ̳-:*ʹb m9#dOS'Uͱ֞N˻j,\I ۪O SSn:3UJ:]ѧ#AK ҌJ% c[az"ЪsYzp;sƧ;Zg" E ~DHn4Bw&pF"D|w=)}iKzl? y=3;!ߩX |FF e3hZQr>,L8m {;xx˭ap *Lީ<ggxc #B dVii;$'G?xsѽ\|p@}Qj?Ge{KKĎ'kpg9ȥf^I}1D)eUlp<8l! G(Qap^9x5 V394- 16BJ|Y6xW]'JnOM ;G+ xUlF~ ϳ [QRbLq_y_ r9K_JOSEDFCbf%m[˶4^GwHڃni#]"]#!Q$}YNuv>>kQuQM-ڥQm}]ѿKRۃo.ءx`jyXw*Mlͷy~[L:%nǪņETBϻ$XU&;g+Z1-מ0*PdXn]oGWrp:`{1~Z\K$MRv`QCƁ%驪UKglܹrj4%ܼK m7+uMHȖ-׷$k|c,NHwڷw.2"=ڔ#M:ƌ]|7O%RG(mu> T7, P9 ;JUĻ> 0?W`] <0IUx]ܰ@*[ Nc|;fL;d.?9ڗe{=Žy۠˒۟"q㼠" D "r8:؏;'(`)L9 >IX}vdCutA" P _~wgF˥Ow&kڅGNgKE<~i4BuU<(k ST>-*XA쯕|~Cxz0 ~ bz+#aj|XjRևP ,arQP&]2ػAErVl+ܖ(ƒ?WAQR $C ЄpmVCWkǖ'\rouZlPnγ긩냨Q}EWIq3f=d_mVF*&=QAe0v~`x7f7-3iV1^# 뭤?F>;z@k̉f4gst,?T{=jٽ:f(E)[P=I_!4{"ʣ7ʯB=r/?x{3xZyvr$N >ZYK0X`"kĩX&HSE57V"FM\vF.x&9qv>r:|ܸM[tIyAG\&"u/nf8'֝%Lsv91^rÜ9Q ,1/OXWɍm/wr?Y e*c#D4F1BR,<؜Dk(2H"s$om%JqTQ( bvw9lVJHH!]uFn+e] =GUN'Q4hoyM'2}V{ʸ yeOAF0M89! 6\x+fܛA/B#5ׄ@Fl0% +<4ɝX$FL{mniZ=sW`Jn(:aXiVh5|ng M=Ż$t Ri5w!`cʭe1Jw)/LV& Ą?)#chnqD3rZZU3G{hJP~SX˓%%~k X+[LlcP`4b9SmڅO=̋9Vtg5"V6K ` Ȁ.&o{׊B;lѮ;Cji°smݶwf^ L=ʛ端nQl.˯ʊ^ԡSoqToY~zh[om/}UcT,u=7"ߜEyaa*Mnə&W',|ń匫[yN`A 潓~ '].G:z4K1. 
Y (h߷-m }ۂmA߶,,5j℠`X]A#SFNңFNT@#sڼ4 !*xbKt5"BVS*L9K(IaIR^Q*"oW"2_dףBUpZe_g3~jZB?"ѯAa>MyGXhHk^K%"J#Ȝ#8%.S؇+6gPqVn+q~-N?>^ؐ+fF%dilaNg1`,0 ^v0ZkhW OӔo V9y%'Ji e+ f)+Yo{ Yw4C'Wyh5<[;`nfy&@2EF ߫Fky V A+łB2Ak5AHd5F[k.X>#U&5K:-Q"gDe<$S!RciF2u/IQUٷ4s/uմhٶGC2vu/knhwXC*'#{ %tw P|?s5Ir2=Vr3#(4N r !0eQ #T/N,j82}~P8 g;+D yP:QWIvd)3%Ey4n>ޤLxg̾N×XS]- s3$osk>)Ĭݓ1p5l.nDu8k63슴N*Z݇.)5)^uJbT]8Ez-X|Ûn=gqmZerڎL47Ǜ4~'"-.7M$者 D\ji\~4f`+0(si#0N%S2xnR,aA7wn*ܴM 7RrN࢕B&ȡfw;bC +dc0e՝rN.EFݼ_r$_Ig|ݘ45W@1QUWk*Rm}|~=g*Gt9awG5n; IkZQݰaZuHjqv2 o5cڙg%T}}inZva7ttYr]98ξqCq\ I R1 ~49@AKYdZGiN#/C^9|p4\ݳ;<; 9a A"R%V$E3H;զWz(ā.~ga~=ykt7Vߨu{N|@y"Lv~؂'0<ܯn+%dM~wSLT~#TN#˵DD3Mn]ȁ8:w9T=Q͏:6?eÛE)[+:r_JֱS-\\MkP˰\\f[*\u^+k)L@y 8Q 82*1j/(uV;||Y8;9>Ӧ!ځ\We-m40΅yP[BJ^(>ټgZvOY{ll< 7T[mz#ix'eS 3PTR=|R|.7m2Yb6_-ՊT|v}?N@baRkO%%fW+'uvO 8UZ앯Ǿ ܠkӛ^eJBAog?ϓ2ϣl ^Je~]moG+}RFkn\v?Ćѯ)Ųr~3CDP$p~߹O)nt)ǎ|3|_.FQ07fʘ,uڀokD7Cr7Z>tGbU8'#^Sx>Blr>L-,꘠ΕEV'4J@uca}9.h-D<*b\pSs"8mo~EU2rf)td'_d׃@Nf*zv*i24IP&gUr&ӭ7SPt!Rtܼޕ7sPԵgudr (8kz& EqO tO󠤈H]fZsD SZcƠSbTrKfό*qak+P\˴|y7dި/V 7?vz9_Pp|1<ξsN2*B AZeV&  ̥)nuh!;{60ceRy NBPз#$&Dmou~vQXq1[]Y[ڪgނsW9@M%_hVQj?F"IHe"׹hR@/+?Hd*#rT^ t!@<:ըe>l |X9[]Ѵ̈gĞx.>X͘ Da#x΅6pHPy&8 6GIϭM2sFSB#*5I*n"yH!(Q{4ŭ2#~Fqt:[yQvyQ{^sF)U ;Xw xMB@E;xP5ؕY|v=P=u:#yj`';긒j >y)ar4ꢀuG~<*%ؚD/U?_#G' ?[<O.Xu9`>k߽} Zҵ_L$CiVHEL[ =ru*n2V}G:&?2 KD:tVn3^ﺓ"^J1ɹr{jjנh?V~]1E2dE,jqy8KOFӚ|ͯt| i{t_(x>YB="/OvlW:,k ~l6A%,rKJ-!+ufF;y'|{`ɛKSrSugc=Ctk)bVe*' 띴 N]N%qLTy&ѐ4)MT^zWkm=\3 2Fଶq. 9qA!Պl\ $gt2S_ZlY6,.o~z>x?/0,kOs|*$$w*&˓,&扳DZ.U),&o8A_"Ai5Lj\K\PR_ J3U풤 oBS 5"ZT(@ =T+B=&ݥMYs+6OL1i+%;l2Rq'FBgL/"ˑ罹t U4%tLST9Pkb^ZPNiL"Hſ[OD1 R"Hic4悋ly;Tl{ťx~bȡ; cMbx:JYWZ3ݑi^ Q4̫A]öz W>g򺼯`Eu8:yѽ^REmh2MfRGzj|-%ZIPQ x췺s\{ۈ0\w_(h/}(Nyk^s9==Aa/!PnkulJ^X Wna4x}qВ]㏫q`QG̞Dc-zW OըjuFY/s>F%oH]l3h}oA,V WvvqS~īzw: kr S-ѝ.4k)T>D6>omv] ֔򆲩7ܥlr.Mj.F9\8M'ھ9ve+:Uj/X ~|8+6jݒT߲l̻F]Xtڲw]6!Sj<}[H#o͍bx9 .F.om\^Q`;LṒs+/a-ԡ|?N&4 + CqBsOY_D(SBߠjejgIЖLDZ>TQ p`u.`MB@%(atD+@#w&@)Q P)AOr+Z}7<'}g?':yW7˵R;[Lۣ3I߂amsӅI䜻YRɈ16ϠNܛwߍy.Houk]}) v}ekgpұЫmoi=%7KkmP^iuׂ]GK3[JJߢ7j[ :gFB%@%QIԃ1o-[H&$D`Ƥ[{ ]>P1@CvL@\F)΄DиqT@ dZvܿ8X]Oڌn#rr㯣ʲXs>!׍Ϟ:Δ}S9kə{:agq6(EscT~ƾ#sӋctZ $q YJf-a ~@ΦkMPRzTBL ]LECYά$!ZE须f%5{C k wk3UFk~Q/|DRup ]eOWR؀=]8Kf:CWLTF޺(ҕaze![ Xu'h3ժ+th)9n9p?bt0O3p']=-'po7ϟqdF(!ڝhHf Xug,պ+-}$LCh?۾V *X62`BUFɡUF ~K+%~,]!`ADg*åYhed=]@B8 Tug2ҙ3Q +#$CtUk:CWVу(껡+гc"4]I67Ay5ggQF5 R[xr\,gUdֺIQ_Ү(G_WAQXz&[y?657JVm^F?V5iv4v\s$Κk `շ wq |UU,. $ zS.9,snp8$5ĝ}ֹVN[rv!G>dx/ir^qLrpx4ͯ(:*h(Tx@= GiSM<;Y*~=-S*=R ?n7ᙗwAK⹞xJg{zJMuԡ8gؓ=bF ]1Zo2ҕC+~PZt(]WW/xNJWy»Pц/)!lHWkφfJp">vh̫vi[vykn- ׯy 4Fnt)lAm}=[; ҇s_MCjqFhy@`j36c6cԗ^bmfW0WpI ]1Z7^"]Y/:o>{y7ty#; bS+`A1ZQt;Mw b.Gko+~4[ ;uVGi΁3ۻ7jkUUT醹nfnH^RsLZ"mr}0ڗ5*, U훳JO_p ŋkr1{soxstOogWoyH[cQ·Ek3)C`뻧X|5D6΍ey[2~`ڬ>_zPD8p8hi2ʭe_NA<z7pәђwbFv=rXMPvs3vyNhsM %NgځS^jk: `5Pzt(tOJ+ƛ:@>fӳI3w$/2]8{s7oެ;~'wgK4&VpG#B|Qۤ3v fC =/l2zۈ|~)zg9Y `~^wC6B+|7\+^|zĕ=@i?a^7Ÿ;SzsMN}n=6v3jCܳUkfBfϯfkLV|b+^6|\XQ n8 E ><3y i8}ns9go5og]AR?Z},Ϳۨ=IF}9)[SFMA[DVfkQ,#i{W^:r Iy0eǗKnez"Pd06D4Cc6m@PʪYgQ}h9(+-yVj^_ٹki,M!)!d*ҡa#~JAZIDPQɔU&?YSh*C]ˡEï{! Q"v_^DO$w򧺼8;TZڲd<%CtȶvZϛ$Ϫ,iJ̱6gωT,m(xѬɵ STBf@x1Oɇ9Hл1DD+k099į#Z)|ECPZ備)ՐR@(FDd-U3ŕMA $IbkQ\2M*\ -6r%d)0sc$0frYD.I{V RBvX%JJEv)I6Z*e_h\ 4mSN V" VTTPtB[Z CsmUT7JAn0P:TCl7Pt,,xN:M;‡ q b: 0YWrPI!0ΌXQ%@ qPfDI !-+@T޺2ݙPHq Ls$eQj NOڳZ8w@"=doH_($W* q R ՕMH ?J]QvO J2H/k.6shZ y_թ!!ѿ:hIG%Aࠄ"5%Y,K@Hv'UP:o3V7_&M23u&</tk^LKQDAqAQYDfi$% !pB &}fE<ئ3y>{v)? 
?ԂUȖY <]`漄i>00!ƫAy@8m:ӦtdU) rH \ cuLA-ENp`C':!.(jIR|$2L䴢yU,CHttaviy{*EK*Bz<-ex$ ې, }`QչI,TGW>^ՀXż3jۊjQլ$S`L!t5O"3oW#O.&Ɛ)NmmM`Ɓ}~|u:|M{u8ε\Dj+P7] L3`3Fr ac k0Qdf1h֘k>HrDq%RV ]"j31i&rr0-'E <%2` Ar)1lz(CG3{_ю&r9VN 4J`Pj V9$diS镆Rn]W֠6q1lm6`f߀yE!XK1ҤF\$\)E^2`X0jac;JqYQc$6 Y7g11xs\:%X:W (*,mDRhcSBITmLe$R;5֤Y T)JKPKUIu2j1rEB$jd}Rtrg-Iiנ|+Mx57SAj+WqdmPi` NRU0hYSAia/ZD iq#&0Q%#pT5*(=ɃPCX6T*`$*U a3;iS&pUjzX.!b!}N1 %IR8IP.~IWrl$dip(M*?"t+AyVgAA(t{qb\m\m"9s0;:sW fH-{3QǭϋI/><ɐ MAᦒQC|qaoޛSt vE/AN6sgg.?VM:yDRϗ`@478j3]./N^ '_x[W|>v/F_ [7Ch1u9Nơٰs[ 3f.\D- \ d}b#8{'PRwH'apwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu' y!98';'*N ܷww'oRw'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'Pwu'PwX'd!q1\Cq1/˝@zN 卷 ԝ@ ԝ@ ԝ@ ԝ@ ԝ@ ԝ@ ԝ@ ԝ@ ԝ@ ԝ@ ԝ@ޕq#blZ} a' w؆[Qp28הLCQe!(dL 2B&P @!(dL 2B&P @!(dL 2B&P @!(dL 2B&P @!(dL 2B&P @!(dL 2B&d 7ꫧHEGK 5Z\?y9O(ft$0|J\BpOě%@i GD5s56yCWW _ Zw},W̩/;pE+x8䙫rk yߦ.DD@r7?ߞi( ,sS3:r1\ h M#\۾46RpT2M2O/^V8qR"2?j&.jQ?GU *PI5r'Ab#S!LH'#bu$- zߜ5=]&O >|^^_IEϧY z,0QQ"#6Nejd`v%ԧk6P_z[Dz?(BB{[`> `Zoe9H”']ٕBBWֈl-:2sNX]!\ :De<+7tp7A@Kڝ%c[^mMڻvt  IW>v4& /Ja[Nնt(tutŌ=8E`Ž+/th;5iw@WHW`?AvC5d&ҕ"mzl7&t7:=④/f#PEpp|a6(?~aG;@'OχYYMYeb( KnYSe a{,sTqc]OǗC~Q/+e%~+ZՓTZ M{B$N D\rLav':NF,7CWrKcB rt y0 -SNL7gQ]@W= k ]!\c|+@;]!ʾezbT0j<+= r{ Q:HTsB<+,7tpv놲ajá+5] Z_ JwB:DT')|Jql vc _z+FZS`}?EnRWP<:e{<ły ~(/*oQ}2e߶^ƓxskGt !}o! -%t(ir>D\SSfܟ9+?Bt( tute#VCWWJ_ jwBF:@v8f0hUW+d]-EP0 0WyZpn(M}RJ)Qut}V]\T%*Bj?|qlZ6w5A3Ρh}& LE߽.gAp{=B`2hm6Ռ^t{t׶_n ,!U[ģ$4%.lz@?fit{ `5 *?M9:91 '.ߥJ|U:(/HrKuy ϋKQ-*(кm^~H(0B)*=R(ZҼjj]Mc-R(}_~"3yv0GΎ_95S{(K6fϢT','#W K퍫pUC(aU;@WI0BBWVtutέ!#B{CWWR_ " QJpcZ9}:Uٿß_@ɴ.CZd"Yd{}Jb'qA}/#^M%!`HE fhE戮arcӞ+] |#Z)D@WCWZ NGt=++(`}+Di]"]) +a+/thM+@iX:DH]`Q&xWaP҄ox?]ul^7vϛtBGmn(y]@W=5j]`F7tp-}+D)]"]1 W{W)_ Zwbb] ]q+xWWR ]!{WR@WHW[ER!LR)|ꍇSo[wk=:ZCOs8X o k/^e@L/ EjYBzCWW_ ek\pJpݸGtBb7w!_@ϦQT~5"n.]c$;wl'nĨH!Z)zk=z @\'X%=?dJɱS \$_D:8Sڧ6jO-A~lʟV2l0׮7Ųd/}{۷ |-+a^# E"8Ĕ&D&=nJ^ڛ"/hֺ*.\SH>`w~-,R:VB>_hꤩ,d49-^$WQXQbUhi&2+l1NƹIa"'ePG6^jBT]6D uVPRZWXNp&67"fJ\XINX2`HkHgB߿.CdD*r@J3DKer%zީΉD$,\)$/C¤/r89ی<>,Q\v\V?{n[-^ ErѶ@_˕뉧ZmG]ٖ.:vgCW ͽRq]=Xא߂Xc [ i{pmw2w_Nm7*̣QU]Ix{^SŬ66 {5c\06MR'nnolYf_y#mcsF5: *h..:4ZMMee^׹Wo7F[>$YuلL ෴F s5m$<[w2B͹yۣ*7;h0erUB]n=JX s ld⅗.ܢlWKpqn,{ o~NkC?^GiuVys\7%%kH+{M"F]:š=wձl; >c:|i:E%)aU$IR6ђ0J09:+lj(4M$9 Ƶǹ ^|Z3eҎ)]njwe#?.uZĦgɂ'N3.2ħ S@ T7]cCڤ zʤtpcr5ٺwvYj,<"V<ܭSBPWFeBmvc\ƭ~ z q襨ʀa=;Y2٥U!tHȰxyzu<+ RE _'r ?:Z(UͦL ۥaqXD_$d#KPy:땛6YxkZN:<`3|')}dbb&~4r ; .Zǥ=T,.;im9H뿀A ܍OnFE\ӝA>[Flr{ڶ0tiqG>!G_OJEQ}=d!)5|,OV\@Om.r6]TFXBF7:k]ּ m~2\;w j6kO]3+)<]grW Q64r>Yo}#凲\nim3 5ۤޑO%FFn,5>Px ]dھ^ '!J֗1(n;AZzKq-Q ݵ޽vძ'WOj]mٽ}Sݺ"-{z9& 7Ҍ47XGhB :-+K1e"+ʭLKRGa j x{Yy2qM'';իKlQk1<BѸw^Wm{^tړ<5z54cΖ+1, 8,s_<8[8[r3󡭓vr6~ȝ:6e[q+9UDʴf8KˉrRVl^X-sf+̃=R-[Y &֜I-$%9!OIV}Y I E6y ŵLʤlo(7Q͋9ƑNgj,"ܐSԪUwͦ#̡615Nv'U@2:Žq| , r>{UhcW0)+ 5꯰m vI4@;#ެ9q :?g A[CA;< 4hRZR ΐlvwNg Q]`defbh8CӘvh4:494M#뼇љdBkYc#Fr1*'B{aj@)%"e9RD"&KUDJKXdfUm%26{MT6{|};>xmx[47/,x؅=eJ[8$Nm6s[m]~w< ?֊sxtǣ믣vg;ޏQzDf/Gv<[n$? 
-||f+#dEvpɸG :̿Cpi^z5x+W//K7SFSGm#Q~n P*sRVܼMؙ>$!BGR΀/i hV 'v::AHOZ0=뢐P1KLΕAmq`BFK_0%eM۰PPēГY$ 1DsEҐ0P2h QޢA@.Iz& |IF Ǽr=O*߭VS AIaj80n<| x\xպwIʸN؈7ʸN[ie\gԥ2vhe 23mN'_+βECw{9ؙ֘[xYJ$ԝ^vHe 'ʙpwn m$Rv\.d]LiЖ)f0!䓈EST\aWFNlF9䢥6 VQΒge"f0vbVyqT=d:nsiͺ=ެ~8ctd 5P7I$TȚK"+vR !EhUPIA{ԵCyP^i$\Ŷxj; u*uˌۻDo!57cnV41Ny;:BW'1n#tptptptߨ5AhMPS:^D.Cq^fAG!LP\si0_tAH: !y4יDg !g"/J?ଶ;oIyQ0Dү3Fz[UzimQ-8[ >)`bu2I BʨL @2Ϥ)xK>* O3]l4=,bղQI{ 32& W<V5*J\Nb ,O,"^7uX}4u-ǛzBQȄℊ-cR&R3#E1hPJt&0h}3ȶ*ƚ7W^YC :/h*5nD',$F*bs|`Q'7q6IIZr^!0>6hPDBlΆmYqz0/7<49G_Ax4ޑn7Õ|'K_*ExЗ?/&5FEjmהr?ԺcM3u{U w/{1oWRn}D4|śM'ayHH7pttշ}ryypYQ3=e^emճnZQFPO/H )fI { _7IT@e7,~ zIo]4d\g,Ac7 j a!?$!,wrX+Ž+~bWpqBwWR=_Cd#ouLnzgɷq'eMmҷ{Z&Ԏ +#$u+ 6HT ev_6] Ew,n G ~Ё'k[ׇO;&xƔ.1:;(dk>Us=}zd(V{eS$ST$1'\\>s*5 Q:#;twYWM}x` A5gvW0=#R(rGۧrN{;7ǁ̝́GhƳa29q7 P.zRΨ$1wp% /) Xohw(b4dc+&QNR*̀ZjxJ$k0>'̵rlG'iWd8Co_?~SCgēcg6hE>aY筺k:!4XMR?BR D..\_U|3 p :(Rƒ߯zlK,S:n㱤VwzXYr)W82Tmd&vdTj/Xz,e&U7$˘}$[d|ɷIolË;G쬒&1vi'=w*#˂t1(r9C/nuHPȞDMP:( 6L܎ڄV2 %vQ\մc_6Q[{nj/JUkD"Y%&?Gn,vYJSK 9JXP"[q$* >fY RM*a5qaK 0 "Vӏ}VFD#bi$L!YbI@z"X͢"xstXgJ 8cppƔFƉ ۬4'"! &אLĤjpc>`eD&vDh=d:iɞ(y]\ܮRpεcE"eXGN̰`Xز0 )xSjڱ/x(@ؖr!u5u\u)2ș9tV{.XT4qSű@֑YJ_f%Y! X[cL{M 4,jǮ%vЧaq 'MYxnt-ꮺlQ5E:\zpw>zH.&ZIQŜܡZ7Πɲ1 X~"("^:85Sx{R봜l !;gQ!`}LZf8שSr))anS9#&.2!Gw7{*{uу}BnsP?^.o4TǬtFM'cNޒ}jO RA`d\$!bPwpsEmH2%/D!'#ѥ1Լ^kfB̓Zl7t~ٮ[JP Y z+Q zo;^~k6V;|[A-,=,T{|<{5'?nxRutDm@s!qRJV2@]dTF+cb.zR0H"Mt+pKt@62Vg;2Uaa5 ue,T= MgzHiċˇo;6 ?qzx1~爝U҄9>®BQ9NedYAT.B.'yeݭ sQ0^( J^aF1c@ЪRfA[َ~<+TPvڦ2j{_E"߃X)*Th htB$+$dgR(xҲx.x#9C&ϐ#IĂ*E#aTY 1j2\W[R?3]^^TT~싈2"{D|H%Q4fBtFK% juK8SJxmUD347Ndxf9D `0dB &-PC9_?&jO8iuJkմdO\..JbnZ)8ڱ"a#'JfX0,F}olhՆgCa5Ee< ly,s&&#vbá06q >Hя`UDži3JolfVroAqDy&2qs!C?;hH OHtW3Ӂ;t3 a{tId.x帓|8Rp aR M:g調2FHK "]8㉎'Vl7jE)Y6l^/>n~DW6퍧2ƻ8 r&-wڬ2K^ {)R&;Ht,Hh bf5$]70;8%dT ^h-鱨q/Ƣ hdI dO靄pVHq(͒0x8<-cc.Ϭ^1R:Ei Il S`'U mGQkj)L꬘yQZ-p2`EPEXd7_@ LHZ"a hR.EKi udJ3t4(ir S]bǗy9s]3 Ȥ=O1{kX6C1[<;K_4\ew5XB;-8>fH7mVLekH'Jg??}by$-c|C&9ҏ069ڎrF|>;i8ck؍kmJ?ܷM $7L!_ :kI]l?se1i͒nL ši6 s5]n^0>{0e*,ZCНߝ_հYxI_+07x5VbdYo~1mDJsl1$Lne9Hk}fFn]MB@jiM")D'Ro t\bh2$Xbc_!7jwP+ps݉sҭ8RYѳ~*+,7>?n[-g ߻zGr hR")RXĵؕ"$xE "/H4Z#;Bz po;X_ɱL<~Y-HԊ;LfwY|ȇ9E I~|īZ8t_4K^-+ϾN9/6w=8sw8 cR'Wf;vI" V\ɀ~54o/EoCW9vwӞ<_ ^{<ߎHF:~*iNٚ{bRV:5A7\h7N1F@@~[Bw=5\*6"hi.@%,mFS 9 Kz (8XQ8*1Rma0(c #? Iɞ٢TWBr987uoID2 0H'd) t@G*SI5:hXKdXHPmRFsY[[vf[ i W>>,[γBrR1^T9Q$jv5. 
8 !!bB1;q@õ"o(T;x- a:(ml@6vcݻ`k/=>'Jǧ7#mjin.u}~-yH634ר92l2YoEe1&4gC61 ͝ІiCuA;3DF勳FŨNG 텩 !A,k!zngjkZWyთs F"dH ]$I%jB]yG7u*0:26C^mʬG2n^t-ܨ|?6n.j^At߱;=)G_G7ћΟz;Jiyd#%>|7y]WFnWyo7{fϦe;PxWʵg'ʟ +^ @vj.o9t[@&U:62?_o&/" @c6\}Aޜ&RcP}@:GJ`&He4AKhFk$ }O 1YG>i{@JH6fqB6+P&!/ dF FFS 9A=M#<~ړt ?6#9(PKVXiiR5p*-MI 4(!wWg艀gAA׊T1rPn| yE@'}~{ܖ99uytWeQ&.!8/X"0 M):YH֭7y~:'rYI'5PJYowPjh+)KЫ@^FtD>#Œ9<j/))}R(yX4z4*+ `KA$1$I9rF%41@1^ed٭Χ?i[$\*E AAdI9$ȫIz_Lfÿ~¿i0oڊك<ƗでZd3(2,}tm%fn3IQgs#8^alglSj.MD^7Dh~xUVv P{{{{S(]Rm!uZdrQHo^ xN(LTտ~ˬ5TPue 8BXk D% =Er ^c{dzSHR54?{iМm\Ep }(!$N]].&}PCD֠Hb8^4(J%4!.ZΌax`d=Rw,]0CPHu8Lt"cd^ P gUA9lT`2a_wf̽mNǷlör~ /c?ԃҢ aM+E$َ5`Jfup)f k%(%CK!v(#,!KVrg)jb*QPRy ~2uFߝ}\ bDGBFoōվϟ>,{ Z[j@Ԙz!BBTv[ő0+/.}pNAY%*6dхQYʍiE:DȠN.GF77wf'hqAM]OD /~x8ޡִ2(ymCNC)EPTj<<=^&` = U1:~C]Z3z(6-1* 79uF"[p9w4eD@F2O:߯:.)qvdqV!,NH("&(j\je5e 9XIΤf,j5׼wo~hf?j0R/ Fvy횷Wx'^ Xݸq6)xE`|%HDIB{L6gþ8Kg=]{{OU`ן8D=cz?STBzvuSsT߾w~_Ir~> 4MMpl#ܭnl+ů_!|^xs?;ź iB6T;HHj+KZyypuuŻVWf˂BrիQ |\J_'[U|͔[ M$~ z;7NuA>,Czvy& eٻwņU vjuм`*=~kZ)nvGhbWbRCEJ^fů2yzq{ѿ<Xp皰Yz8[(~lfLO+ǽLH]?=V͈|u9&k3hgdm0H~7>c n7)u|e*; AzV:Οyt96U{V)Nϛp>]]y@ݨ-M3xLx;ue/"*о Zϝ~7]V`U}qg}71`d(`,#mVdAbKm ~L/?!Zo 3E Al)av5_K^i!= Pr4litz8:cyˡo67v\-}~5 =ƅU'\IW~X7}iT 55chjV=fU[Ԭj 65^rWUa'㮪v-+֒]Aw +VXk8wU5TU֊[~7V^2ѣ)4uW/TW飺jQ 4RlcGw˫UUɸV]<wD-Z*97讴px*lɸ⮪ ]U-Q +#UU{}?wzCwWU˕]!w~%XYɾXe׵~O76g+m緉LG'.&WO_N3'5W珞{ӻ߷0~>hoT}2i o9l/'"}d6k{˦sz?ͯ|YK9~lXcL4E5Vu_,=Oo U|x}YktIj!f1?u]tC9X%BSTV"54Wހ%jKKgoVkhCcmJSqE+ǸE\pxq{K᮹o\{sst;!E9F7&Kj X@c>!AZ!ᄰYUfU]ORcw"6o)+V)<wU5'WPunq+-+9hk+ tҠN]UmZiPo]ym<'㮪ޞbm|tUTc^[tW(E[-$i?\VaS[HOr>릺Nݲ"yvM>k:{M'l"_yڊB"ƀ !) hm<ϾJ/ev;hȻ貾I1;QTl!D"/79!&Hۼ/u<$@M^6{(HKk,)E]GmÖ0î&YůůVh4.ר|kM60gk\*&模Ϫ"hU$S@0ZU.%FײtgRyptT>YO퓏^ 'Y1q&Q7e9jڳn4.4Q!-`,ne8:(]k"WAl2jfcխ(r!JYɤhBcpU>RIhbDMaun헔[db - ct:&(R*J**dJ d @R~Ա=ұ"?um{Lq5Y2XWFZ=>Isk!&ޞҤ -$ڽeFO *w؝ǾjH@ő 苷|we^|\NDqx!'l_a0a zg/4|=N/H6f>V|B&hYV+Rܷ&z(J񮚨bK&ΔjMmdj[YSM%/wvgC'xj˶|??;Z?0FuRQ[R \SlZmRPl=i(-k›7}v~5isV.][힐+EVo q>"˵K+mVjnio>(o]սZGmy wgg+ڛ_]_ݺ9/h{-~uMó3~ev\sWnn[^]r1fYa%,(&-Rv}XDŽq*z*mwg>9[x`x1E4LO#LSjzL3W«!ҁ^xnŲ-a8?2@~_{YUW1 dVHuh:1Udwdݞ٭)u{d~YДJHH. 4gizlJ*WJP3)@րIB٘pJ:&.T-`@Lh`G3&ΆMp~q=yUZ磏~%zNѷ{գM?=~sgYD+nhcŖ tpDQi@|jDL&wUn(q7q+ ꚥvߙmiG>.ㅛ8ߖl~ZxoZuhf=je]_wEvp9s.>.Љ!EɶHE&K&P~s[ɷC~R{m)P;e)Y7TH@肱1RfTO2X>:Hw"ƚdAeՓ,O(ks >k{F[uf*,>qƋcB&gGX R:c @e/%) W mT:̑c.wo7q TxL1%IzRr5 AAC&PʲeG%jU(q\#pI,3eu6>.fT^fk:&gOWɡ4\ 6K,\e^nD4˳ߞJNxn}InExf sR6I%AS|<:jˁdA92h DI:O\4T pL#y$qڬ|sRs>zwǨ`tnʇ nнg'\˱GD8ǃwhY4?!)_vp,̍-^K+~9҂u&U# $:4w&xϮSҘMγioʓLo3!emc=t!@_lk'/JkMsDU~IE1DZغڐjL@P/+/XXT( q[YQRYwǍ.4fkdYO1YW+y\x}' k?M\0KR4i_iR.` bb=o?fDHKO/'׾GA蝵xI)VR:w~԰g"ozʹᧃw\-눉=~5a>T}R#JJbK;X(dS8k, %Ga]&;QZ}v;ouiG'>ǿ}~߲s)`A;H EKZ{m rh1)hvb\M0 f w%UYO!$c!rT2sLTUGj>h9ې~$7?Ԛ{;X.Qm=SZ4kF} )D^>13F+R"**I9:,5=yoLjjyhW0F(ETѡZpJ~&kKDhyz8nh3_ry).֠LO jF"Zj;e;mH1iw:T'Mwvz"_tbvK|^)_]|o9*Z0>oϟZ.uMS)vKnڞ;i/)ĬO2b.7:^ =Og|uIloz4>O:;>=6zNy-. 
u{핃RIu->uٌr2*r&sӸQ㏌߲ELӽffB4v1\O'ec5Æ!8keO?_Þ\ҬD0gpvAi=o?t- 'Ãe 7?ǧݸ;!q/wGnY#zsʡFnt!$%ʟZ<=u<~C',Gw3Ҍ <ﳳXJ6*w|UUyzztucha.&쮩|o|<ݸcfE|K:hwjmiL͔wb[/dV9}&tpכ>,Wr_[sw?tw& tlxqĺ{i;vis&|3ӁZSDfaV.#z%HE+`p1An̅OW= ܐJ!2JڌX9XʾTq=KsM]-%sQJXuN*\8QX,"{;8,1v=JzR{Ek4 yɘ~Z6-|>Tio8AGZi0;:ˏrv,4(됣1ZNUb2*2TY襌b):.rb;ko2Zy^ڌ^ ASp>#|%itt~Wid:-$DM0Y 8ȆaD?&tf3έd:ۙ]v!xPZLG=.clUV1V9C1]Qg(J񮚨bK&ΔjMm%!faүc܋ f&"x#K$$%Kd=L[dHl"OwUʛaT<j[7;&:g~坔"7l<  t1tBΠ}*0@rTANT)l(2 yQ4ǟBSC#Q#5)Eoz)8 5"StKvc;e/e`czrAh|L%)%yyH}G<|RqP-o_JyTu:2e()HL({JxDD8JBD2*#,:$NROkP!ǂA$AR- lBjU4X.T s8[M5ʞqX;>ίn(h4::rN2*BZiGL$FsR' f.O95mvC$rg#9YM.I3A%6 DM/u4g7c8maƴX Y`\UE=0*EW{7UQj#H͌N(g@ht`-d@*띆\3hc~"Et$:QyЅx pm||Xï21ӏc4̈gĞ,6!>XH"(< (T` 6bGIϭMRKFSBG4x(#ŇŶacq,LJ@a;Ϊ/ȝXpQ.{$ُ'~ iXC^!$1`(u@d8EÁ&hUKk0@]7f DJ Rr (6RT Ra+.'KwZ68Mֻʹo1x?3Dqh>XUKRu˲1ojsGwf[YaވL-u%ZM_]߫[xgy*.[5N`21Fmyt2JFX sOLJ1]nufau\|^NPWײgW!_o"WP}k{̱#UWYe+<z{vsys.JB(I@ǨJbuсO*` 0*lD$Emm{|\hQZϏ6N|s>.KXȵDHTG%g£UL=ԄקYjUtd,prjN8vHܝt9'i[է *I+LA_Ax;}cĻD?,% K/<+={Ve[ΞObg_%=^LجL=ීON+|#oo^-E(t?R-.Jd2O*o#vVkpgg[<VLϞ=AY ?C_"33!^d)Eצּ#ת|1׮kXrCvV߮"Aa}2s񂅠WU*(~C"!`*O WZ O:b?F-3_6c4z9|A^ߟ#v6EtQ0[hpB`\> RB"LRD*Rΐ)8S$9%9?Irx6 lBC:,()5aܙ BDNH^(9YUxe\zKrր[u>e>G)8qaأ ޗ2`;CWV]+DG*d'HWBPNN!s}Eu8X&[yU +\ Uɼ5`o@Ai$6k(` JdeM#` t+LWh:*3՜MKVFeR}0&ߖގ*vT5PXu0&o/&N#^SN(܆_Ȓy>}:UEj|?߼>k RPΌ.𮌶mm3JC ] 2`;CWHgUF=]=RPA;DWxW.wrv( ҕ!B\U۝`F{zted~!ʀAt2\-BW֟Q~+ "O< 9-]Vh02bXSE#4GWXsp% ]!ZCZOWT=]=Ab X*V2\mBW,/ ʶUFXOWOFu) #p ]eF%+!^6ٺ(T B1ߺvK }=μOc8WܘEO=E8Z QEB !(Q:]jϵo"~h#{/JB/6\ ;W$Y!8 Q u/np/!˕^Gyqc)N(TzP:{2I":!`3YhWbִ~* QL>6{:hx4f ;CWwFtQ~)ҕ̧bUlXg | ahy 2J!zzt ռK`l]!\IW*v(UOWOʺDW1 ]eBWj}^AFϠCWhjt wi*Õ+tvB=]u/(R~ 嶺]yW_[ tu"i S:CW.ȮUFtQj+NCt*ehdUF)IOWO~!zFaɿe}ﬥ(hB ׮xoA(J]ro^fqg;Mz׍W/@ŲdQ1O]FJwyEٴ- "@_W'\VroDu/o+f ƺVu ߫ n'jmYer$A՚Ԣ'Z5alU/lsAj٢>B* hH$(nbT$淪H}N5?HC GQ^\Der@H Q錐HKHR[*=\%U~sov#y 5nZn3_LchsMO65- +YbhS[(΂\ 'PK A a:Au$(v!sbe I ?C:HdsgFhJI*F#j(D;MM]( Ca3QMn 8[bjn *\\\s=-.oM@(%J "d ܡ'Q   IƠMjqTw BƌbLQf1xio,УTxI&yg-*(?MQ^vǰ4(Thn#@]"m@R\ QX@ 2\BQ!}Z;QhUmHIMRV(m SVI'Q = u4q$i_8YכLScKT!YZ% sx94C>Qٻ8W5MA?n;F=-d|)9$[[l3ݧn{nUb|jys+IH/ZƩJj[+s!$+z[t΁z9)!Kju}d SavnE);IM -ڒBGZGWH mFN^Dچ6G;%Xr!xQ+QDdYk\mF/C( I:'͋Eէ (хk§tp}E ܔ43HT.hI[V RBv$JkCvYviF* h-͡]ǯ:mV(4 I&jeW)qh4yֲtXBc ]b uR6 Ņ5 -cwk.4˺ F@BS VXOo 4fa=gAoxPѪ J(ڑkj $*J'0'U6.H1?%؎vok۪nWHFdFfC2!խl\'SYBL dJSJt"T]!z@e gS d+Xd*zbJe5͐jPoB+"X2nP(SP֭:bH9%X !@YP6{iFn3%!2֜An1 q04ƎAC\)L1R"%8TR 3k'T b;XK9(i3XGfұK!\+B4ަ2͙PHq s$e中`Q e (Ez@ߑPI@PSQzX]T[jsO J H/[.A1=#iBU3h_Mr^KYJDA Ք`dVt=A۽@VQ=r+ZQCk>84iA0(49;G{يq)4cMDr|T1&PT1yQHa8IB1M`ۀY;lLb(GoVv:дN9xQ ^]-^ƺ L}6=DF: xs:pv>m:@CVLAG=$]I4T*#d`&SQdv֣2E ) >@E&rZ5ȼ`|ڄLktq;XG4/! O͗Ud 2inuG-A@8_UTBN5~d}%TU;+Q$CtN](I WKh$dip/(m*?!tAygjp N}TQ|m/r}l;n1"]^ja7!kތo(N_|}"uwBMyPxn*[5ޥ_`=%I ve)oNA直K::ݿjyqHamo}_m\\`Wz^^ #"D ҶWRJ޶k﬈n`lN֋8>z1ai $}'uًDN%G* ^ a6>kuR?y'PjN2!}:; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@t99,-^ J :&pW=K'(p N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@zN g3'J;'J @#@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; l@F[f`Tq\"Q>u'ewNg&@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; |@owlxb1m5׷VWgexrs2.iaq pq h{%x>cG˫H ?Rh~X\# q- f_od1=!E)w_8]> V@w _8Pn)^FNWv7ãAU:yn{'KJqGֱ by~w.2?|] =Q$&M HΣX-cyu2۳_vòhş^ qt8;j mQfi2 9;H\:p__B_&p;zr { !_v{ o 7gd9EzzS[mk>vXkexA^@vcIrp Dr46xE{GcL*ԡvo; rwpvpMAOTr[@g.XkZ7pl鷇\O. Cw.'UƧb26r.nbp>vg 3XUtmvF/ VORz?$rbtĨqcz!(Ƈ'FژT7͎ON4NI_EKzO+ _yEjC!w=(F#8:\˾1M;^좫Уߘ/NR)P2J_bEM_"Voe<`C %la=/aW< vJ,9Ka}zr7p{J0t=,W [QL M5kW{qG6lH![~9::=d7֠xbz-?l! 
]l.7z`nQ`k˵&XeM9z9z˻͝ yy'͋Pp~ge7N';vƅSᄾʝm6Ϩϳ?5<$>2F'=B!f  :.D^'#˧zh=yn2nڤq)F:}O6T75(NP~]n|O|>S?_ UT }mW7xpg#^|Niu:;Fm}yCe/'ojzd)RLNQ^].1Q믈LW+)~ MF2߆Ioʌ3N@>(l |!ǗԴ%&&퍶j(L?5+5=Ioms,>#>PtJ\pk>$sC2mgd4Of8Gz;)DŽ5:-cs ȝxHfe#Nf7q(6oԂy?mGzFtƅq.tEh7ҕsQ9n6tJ̆:~(gztVrN vf6tEp?͏+{~6r#b .^,n`;Sv_ǺH$;q+^,%KV3ǣOUłRN,)uEN= }Z/+Bӡ+kE@;`uEhꪠT]ϙ؂sW0fKWGJ~ʶЕ]ϭZ_~*:CWtUĞNDI ;DW0E&BWmR˞N$2͡CtUUk:CWm=])ҕRBmL3g]*~FJ[n#z-|/OuR7J:ibG3med>(@vIe :2 \˻2 ZTmWضUƛ Z.!+xΦ+tUJv*(U"]JUlCW5\">SP)ҕeܚ.'(3tU**hm;]k{:Bxސ ;CW ]m+B)G_]=ޜ3fxCSOap͑cW#e&8 m2Е]ltg*hm;])ҕR.+, ]BWTmRN$7vI]`%3tUJ*huUA ZCtcu \*hNWs)MQ)} .)ĭQh9Ov-V5ԲNzƁ4fϭAf~[rXa={;Eɭ>OjD |`Mg|Wfk1}oVP>}H]Нwk+tEh~TAMOW'HW L"PU+;jURS+%iy+g5q ce; % NP ԺCtUU+/*({g++س7>m3]:fG^d~ZqS- t=]=D CtE> .UAtE(a-IVOWCWB.:DW;t%du c-d8 Jj Fue3tUr*hi;]Rtutk ڷ'ӯ/8GL*r래ڭAPkѝ]yWhWҰ>fw4m'{:&diI2|ZK0 _5~ߦ\vֻ~hODe#1rir){P+ ±II36ֆ!ir+Wμ+hM5BA F8E`b{)]Gr\ә@yA _PPڞNmֶ3tU;yah)(wteAv'`\"v,l.wD"4rh6  t~?f_n $b(LBtN2&4R!(ZH H |YZP+0꿿-IFlN ߾{}petg^.ptby$e;k}zv~i{sˊy3"*m7yUMjYчwzj~W~?$2)l|&Ebߔk7d8X y֥]8]øW3t~_nD+vO#\L7}~[AņޗOf8fyyK~,dˏ&7iuwL|똈cb4Wm%>^j A5 lFΣ3&^\AIe0^2IY\|&~ZU+kgB3*ۋqn9h}z6K,,rL)zťy\Ef1"d.s.{ҡ<#)>'Ñ[؝s[e_h@,>έ [2<,;a՛ш/lcv<^>>ޓ[|rw=A>n-ͥ\uPx:'tkk}x} WSwY\q0٣NzċuWՋYx׳Nzj[ 6OiVUn=R';='dl/**]Qt5/t8Gl\ OJVTZ}>5|zp3n_.?%v~@|nuYkh.,tjLӣ4rW׫Cr!,'-TT< o/9}~U8ƞsa9_j^j}\?EW9a'q}СgؽT|xZFx?̋F՝}7Q7ܛ=zޓ/\pa ]xnr%hX#\ꆅ߭ Բ==j6xJ`_?}ٴT4NkMA[Cryfxu|}ZX{fgP 떎fgu`S|ez>]"VBRK2,c9V'Hl7*ܻo؀N _0 \Ji+/$3JJgw_I;uN:IV86JFVf>hBޒs.P2 Z OUIX%u1;PDn̽::Q9A-XF憝v&2 jL-W^>WUM5W_,רn*PB~׿6!dXtF_deQ,G,^Ⱦ-BVHvYŝOi rc3H1iHK9B&SFPhyA P:%>&eJHY8"m mĻP]Oƿ޵H.oH^\}^lB;W)[MɻvtOE] -.CvQ[zHRqKڼy5_RU52e4ʡuAkuBۯؿx1g iPF*y&[r"N ]ԫ> ,'רJ-;t")<HgM@#+!rcJ2=NϬ2ZDIgްJm nT ܭltw: ɀ\qNo&aAmkʥUxtBE#Jgd8;He+!Jm!Y 3>HmURBszMIo1(דּ^JLLB#\,m18{eKRc)%.!%1 )3M9)'+$[b1svX MV6J'n6,G4%4X2b] o`4!Za!;Q貎GӀڅvUl1R!1\v 1]$Qyo9JsByfzkMR(*?UINր6U"Ope,bp=+1i ٪H_5 h*ac\pFwxeCsXeE-*-! g IC` HnJY#EGY M5Y ёJZ{$]&lA=Gkm{`27r18ԗ,Ԧ5yiklx5yF'My_unU$dU}9g$K#I ma\Y`z^}b_my(BuLR4iTH#.z|_e+lF}Ƙ#Icai^!I8iFT$1he 9wHJxF5_ggzdpuc+*_lz<"̮eZIؼӿJQ\:2U2 I ';{Y-l=;FH*+ flւrMBɵt%MX^G;>qRS%HGFsM/h n ި;.IcZgWT>ua/D ¸"WOfT3.ֵt|1fA||͇ͭDtxU2;$S$mL|dOZ V"aՓWjiÁzk{Opۧ j h2>[@ Aq-MTq.SQbce,/qO =1*ʒé F'M<x!ebTHXJ++>!H3 AF>63}b{&ҥSRjD㏥͵A%vgQAҙ1dm\ȜxR3΃Hrg܆VrJIT 'SVkcAe:*ym@q$KbF25Ќ~"p 0suc4~8o,`̻>(0QR1rٻ6$U{c!;8 6=#H\bI,;|>$qHjJ= H3===Uտ9i01%s$PGQA^\)XT4=98;4gowU'gC>/㕫 >(TUEq\O \O><}7UMlܩ޺qĽm`;D/n}[fkǬݝ1Sx?F]/v07z.䁽}z7lRu{,߳%ZzVwKo(I6N#Žԕ0OW-W)o+p>;N A;j|#z"Gl7jyHmE ګ헝a=]o֐I[=+#yς.L;e k*uD Bo#9FUx`[JL$ټ)(9X\$am u1YN7~y9 Apts؅$D :$FAyLjJ\g_DC J9 e( @nBy5=JϨ\6F[@/Px}ą'/pR-V1} I@ JOw9 }{&}J&^Zm@]Q]*L j=9!Yip#I*?y9,|rt|吖-]K o,Z,o#^{Z7{ru,]XS~W5s &MdGe yMwɣ$xPL,AǛ%'blvdll H* e\TZzF "Jy"2eLKR2nlPjGۋɍteiQ@XkKxZRx3[3a sH^0dzlTiQ#\:be`T68WD'2 ~}q-CptT SADDs 9%>(ZXB˔F ]]_HyQͪ\bO뫃=#+us|| &jg}_bzܨmJ*ؼspZh#k4u~}Ssb姃:KyԙcAJ"TY=2X8.bSnT9A袨S8%3xC-@rR!td)O$r<~ DIe}'B׉X0ܙ-H#_T*S00tgj幭kNL({J!ΣVPQbL#n&&N&ce#p;Jp)ņ[2*da18T²P)wD77dH=Rfga>uO~Ӡrj4Ϳp2f(aDTN;SI(Ɉ%@*ac\H0=0< q&W:(ᕈ٨vĄaU HЦQG~2k*Hbq6v`jjUxP5DAO &V{+l1$vJDT(.KU #ʨl 1H*ȸ<,6VI²`<X>D4D%ւר3I8޸ȭ&QGS8Jb^\e=YC QZn x&),gP FE5jLh4Pùpp~ llŨ@( 9K^.n=QJ#ETɇNĐ`H((XmPrrTa18T#9:$Zʛ=zG~SVq(q8ðBxWaR*AT(pD+iJ X%8RT!&%X&!78CȎΩZqa; lwCi Gcu4q୍c!TG`$}]zAL7C[,g Z{&YB`x?BןI>{s{Zfm>~7\;\rm7n pը0[%Aaf:0ԆmZ[i Ri}(xe鋛*i9/ -K`)1 *댩 `\啐H{,u,pP!/WŽ1Ra*&L!!42%7h zI7ZKbYcAGُBvIR1i ?弆8[ǷaާbXj'HZ:B{tHCՎj;!f!ҏ&ӺyS{j*<}ɟ|~L8C~2|s{ !\N~h'wa((jj/)qd-5ݵ,qг[=S Bm y3Q3 mжu<ݶQ_I^N˟ KVPˆVڭlT? ;?Ԗ  ;h r:5 _0xAS6~ g`ۛ@/Q^b:ry:h=K/`ǡü-+Ʀm"WY*]LU}R L4AĽ" iӵdޛw̷]oAY7^1|$ѐֵEyjq_lAܮ Qx9si =k51-TG^ V @*:R&+22C!F|(Kpy|c ΍asVm잱wU!^hCD0G>E"Ӊ&$2(z-¼-z;%޽GkӤ MvC|]Cԛ oGALLݷ@m49ѱ,KtT1QǓ RH"wD9.) 
"=]p+Ɨ:^r3ض^K˘l ٞi-Bk&Cq?N єjpڭ`ʏM!)Hff݇U+) MR:XȶAd"!$Tۓ_RNXi\ud9EƑu6,(碰Hr$eHe鰔.s'&J9@b.H&!h)')6p&i@m+ՑYh.fYSΖpv\y\-Pggg~yse0Sj7.e7_gqкZKb}5rŕu|+J0j^Z !ϳ2^7o ~n.dCo/o>}# ϑjM`HX^? Wy"b@5xrVqa[5קcb ݬ{w+HfBzi^־6ԫԚ}hu .q{.yr}ˏz.5 #k#3]3>6mn8%Pf& »[OsVuY8qxpn1n$\;0Tl30N:Q~[vRY~?r_fOr l_Fm$5M}ˮ?0Lll[fAZ2}bW&~s!Xn,uۣσx0euGeZC`ϧ.@U fp&fo/qr5n.S_ˎjݲfmzbrLNV2D\/r76ݩ\וz;6t{eWyABIs";^d,`"PNDE{} 'q#E{~;ES቗ACE)ndK,Cb"$cDg| 'SMNwPHšT'Yo|tEzqAbSl2EpE[)_js}VFz!p:y$@jP*.*1 3_T`mz1e>):PBKk1RNj QG6)Q)FXB5bZ&6}ik'kSC]gb0Smmݽ|z WB=4Ky?4K8wm )X~0{npE e1Id+A$c($M=5=UէNkNf)Kȱ,e)9h^ ǫFFO]eqɸ,dJt+!g3B9H8ƥz[,6\6\@$'DT8yaLD^H~A,aU1Gsl~#KJ| Jӹ<ݟZQ-g7URb_A~imר5F|-A?>XL LkO-LO`8f]P|j^ۼ*FzfrR o +]^PͿqjtɬैJ>RArM2g(m,!zJ\qB* m+v**K+ű,zB9!w3.O]eq9w5GJIw܋tWH6A]_4y5 GY-J\2sfVu+Rr,&UR3JrR(K `,S9{CXz*K|XC\PCPmn^_(?IABݓw{6ybNu@dGa$hh"=qFHE\HI(pMPךT#3Ԝr P{UEgỞ?pҸl^vӒ{8ξ1GϽo>9PT{TWmJ}Ot 0!P崔y I%OpcM |ŧڨݤ{wF?MhVMАL)U0GsVW}7eRy QTT-MŤ7gC\ePlb/z]N672U9*)%} 8M*(/Mr"َy4>dyA>*zSh`fpndIgUxPڧ٤=7荏td8{ YbE#w,OZ .K(x`K\"1[ `@tpKRƂI&.`1^z\2eiMm ?Nc.XEeXP@1 $p L eTP=s8K:c9 uE<7WZE<1"c2DP  g72&i% 5mLRb^f @G,w)ȱ}%s![Iڤ&.ׄş?q+?E{;q3Y ݻ0ʄOW`| %1'ҢƗ `aEv.O,"q%K"B Xa&" 4<:bVe, 6v˃R/2!JԸ!3MJY@(5" 2BlM ҆|Qcb3ϯF]=l6![6B }:nyO ? ,thy{7Y9Q1(1ΉjKa#LQ$ɶkRq`Ubh "`"Jlr j͘UZy܂⬯8~wu#ĊhrELQk.|Rqy6) 7ܤj#_h).{N谧R*~=1h?nfr$(#xp|XX+՜jTҠ=N)3OHR7"n&cj7=[' I^OWgjz_d\.F[EY^ &,m4Tr0ܥRi9yxˍws  "p3^&-rZ;ƅ $!8l B ÉU@H2yͤ6h͙DLI`hwjGstZo; 5qw }\{ۼӿ!jfVę}XMes5տ]<2) {!rǓD)@$znQN$5>X#Ut3/CzV|mi)Q1":-dESӒg<ZG:yVBFqû_C[voa}K̿iW-ݴmKWΏw^e 3*\A;ߤw!w]mһa>>\ݽc]U;ͱTW_*dp1b5+*%ݨ%ozULأڮj5L7zҬ[eA*s[|#5{W 6Ikr^}}UVlJGrEiZ%X>aEBȷ)~;e6Cruӿ=zhJ(! Kpd" aj 4{V\0+gϢ7^^M&U$E'ED㨳Q>Q֗T{}AfӸŢ7XG1U:^%և`g?<%ŋ CLpsb,B孞rCpMre7V%{,_%.w}THfd4 ft^U{B# ѷv&mmCt3oA?W|Sl O&zJŦ4ѫwEj&9H5_үUGZ_-d^! ]fG?.buoiD 1wyC g~d5.4/iS8:LG@@Ѭ'.Ň5sGpO:QYE^z jZ=SRG:6a3mz>wTU]+[slsêuݵ,DrM{~٤ƺˉL I0D^$F( 8Eh"FE֦ yTu쎎, ~}i-Bq3/cI/)$%!XFT:jiN(CNZ;9V^@ޗU{,ƎmÔ".[ NZm.>C)"aP+ l4t!?;p CE/[Zb?\zwA흲@-<)@H7RO  Z0䝵Ax'8㶆*sh^;]bvNWr/cc<$Eko/LzwCb8˾,?mF,|jyyq4A*5U S c4AR@J!)T) Pw`w8Eѐ9U~Sm2u5ߌoQ|/w?X7Ês&긷3+>]6wTh_3A ;TyV FqA2kDD`!dD$ Y%$$KAr.dTzJOY=[".xbSnTpqQVd#Ĩdbg18A THYʵ{:Qr|TArkleQ,wTi?f HT:I&r!Eg> ds rJY4.vhO8D$LgIyph$qɍT9:qC g9(}cunMOvuGR+B>cno0ع0ŗǰ#99~r˖`XMdU~bv[JڤJ˷9Ou TL+ܚ=2j[aP&(}X,zI{%qbOL̤FC" ՌښcY@l̒ R=)`켈YIQs-sPIIu;#gfV.nW.4Bƒceһ)d:'4{n'ᬞAϗo\cg cAWL+F9XD I35C.7ynOf̊=IEmJmV^h&ce\RN2 v}295 <w쫵mZZ`7F/JTA%6Z")cms;< i'<6`‘F&!iDU>5bgl׈HԞd|qɞzxzX{ցd#raY,>Io]92g,)xEvCѱ>-5r!"_pdB`g ֖Bє̣y@n&i+HpdR~Gq_XNOӗkZ}o(P﫧N<, /Ծ.5l}~x4O u\Wo#)ڌ^6W7O~\ZJVqf2Lp{Bo|vjunc"F_?J˶r҃[5ޥߚ~>һdro~Njr~FƔ ƺ?͹wc3-k4SzoѮTVy?yq:V,j׋ݬzO.WOҬ=׾vԫh5.cxu>ОG]nTS3H6WP ܡz/{rw>J.sÙqp9]y Xn5s"Vk[+fkA%ae¥'lf}M\Jm ll-yxNҢqZǮ\;S~"p_~Jx}3No.SIns+oI"tiX4Qf̚}Lc -{NʹUn֢uǗp;0̚~~kLdMšNaRV׻57:>絃1VVbeY-v>~[.Ef!X"lɌg'sy]D ;K9M{yEmMg^v9'gf('!;4 2 *MȘ$SpH{ԎaEZz/[\H&i .A6fA@Rit.$M>Q' cSEns,ҝAfU?M^RGRtIٕ+"br6Ϻ~KV0drJ|ȁssB{n!KBl3r o)Rӆh51;"fܠw囧ro1fL%c00d8*9 V2A edlB]AZamͮvq fX&qI:O9vN&=rdV I,lCs/TGT94=*G:V:`&a62IT B'N,)%*KFaW\{˜*qYd.s=:&գ;tFΎVe[fRk}f<4^0x[RMS+cP ]ܰy= n܎n3RK?gi۹+Fץ,9Y'VRN|pBU lPIr})#)>H $SQ2ǹVfguH[m0 %с}>TIXm(\$bCpt^H|T Fځ`)j.)9}^>&ϟ8Kx$\x[jR)(Udz/G]jU/0ͽE#%Ltn> e;cmCuuTZPK,d+QVY^iōg+9=d|&ԚeZެV~0G]۹%D9˔CmMAdQFܔ%1Ι"ZEr^>!B' fVW}|w~<ds$I8R!@eRCpK]#}AGDp[E r|)GÅZ+hPT4 +w/H] uED:vuUWWߣR,px@~7XF9-+WmBƒlFHVt08y伀ii6|Nw0 RSdYqaܚg㾹>jvLp Au! 
i]ia>Fey(z_3runLyd|eY8?fnQ}>ow_$/I鋤ER"){3I鍭HJ_$/I鋤ER")}HJ_$7ܘN6FQI3duPaيsd3@m3D}Ib/xI8%=i.J r0xAL60sxgU 0d/]63?M6YFg'keVЦdC-=كH9d#(AəIY&;G~m&{a k¨`l)IK&% 0JoXGAC屩 1Ngi8>uja݅k&9)M$] ȳ{+(Y`vg{FW!aa ;df p ܌3FINYwjz٢$۔-;b=,UUH  OG}Qc+BC*Ibqԍ"ԚEV1/ PNa1n-aj 4ԁe"R2T Ra+.X: kI `Sĭ3e&1xr$R~VNbޢ;2'QCԗ9ͺ4~ps98r1x|.-s* 2hܾs]7MܱGcb(|֢$VA{5Ҵt\*)B(¨G_Z}Wko9D.ϋ2ltn7o9K:}Y1N/ԃ8 >d/!ĺwk1ϩXk'c1&l)Q FGt|D94rge:% "!T7F+B7g38xݾ&=`v^Go,ha3/ˁVO#yBi}ݔj.@Qy]*ʱ$hQ"mG^O]y?qDQTEq\1m`L'%;=ST*ArI&"#8&#ۄ(!1Mo3THY"!(Z&g D#;޽茝=wGtZ`6v[GҡU?ɧ/`yGv^E aVffwnW4fAY'=U4݊K{{3iXmǔvRl!~ӗߧlb,uϭ˺v؝6;N ެb[lݟAyx7&.啖d&]A_Xq{Sn+[8FzsfAԛ]rnMFLo>!s{u~,0dV6TB2RMK)r913e‘w,! a j]TXP>(,ڱIN#\͖i)@|b**K*ntVC3vm8ˋ3(۝ЋO|אO~/V9L y꓌DhnN$L䎪 EFӃkvE;^X]ƪ{¼ s]ol^< w.&[T.~~K>%a0 KF94S9^"DʹJ&b U!\)chXB1&Jݡ(؀Z29-KA95'0pD2vD26>׮r ku@p/x ks0>R8hP`8sy28Yي>v FLk^ K\K]pWwU :?;0e\!T \4J3d68p_͠F1L{ Q!DAͣ#(bdQ00A>5f݋KϮ>fE6:xU鱺^ݛz5K,t(J xNd 1k)1D(%:(Qd`o($52JѵF.&oG[\H*x- Uv;%x 8Ƥޓd^f0i+zӯZ-siK $"ÏZ:YBa}8{#ᘑGo40N 1G,$ R ,$yÂQ*2)<Eg5z~s(=D| nYz?en}NO燃Vh~ ޤJ0aUZqAˀ..]QcP 7;gzpr3{+GP x>-oN{ %M tJ9ƅ$Cp؈ib }7'G.^lI;4q/S=2:Yw`eO OiMH@(\s- Zרhɦ!hHxQ﮹ }%Ekta8dchS2n7E3ّԅ4Io4uNYQ! Af\XV)j$C(BJBjj8q"JuB@SY0 tH10K+Yϔj"a(2yGv[oo{ &:ihݤ$Bx7_\T UnYs\UcfB`LC(('J7{lWm*ƀ|asȦҲ†_SQXf6Cr_oTm 7kmHy;vY6oʹ)Nn_=<{D_MH/Io:Vl\ْT#s$ࢷX fU;㧕dتӋ&+wEEhbMjGo;$ 7?S'wi+i֒G%/>J:ޑN|vMG^reJ+V=*Zf*.0_:!2L^Hm鬇]8:0x=(سG}͢T\x >O_snɭg\Z l k1oy'?:R: ġ%Y0.q09.*E "C2157`"b xCx&yg{[2f86yUdd B= FR$Qg6UHdeD O$̩*9сW,JsH\r.hSx43g2`CXD`5G'h}b%I`A!#QˠM+?''L"LR)1C(+&0Ε]°'_WW:\t &5sƵ-ÖO~Z'@m;7,̽œƴ_KYI7XF:|`-* AGDudC$&'&ӰD)$B"(EP֤T-ʄ "O s]D:HU% h,C/(MR +EQ;BXcӍƤ9I#tw%QVu5kg(B=b!(PL+XJ)Ezs-X b-l%C","u~S%aҞu֣ uԪy4($C=: F%pkr&r!m:mΔpIs (UHB1K(-R$f3sO8 4;eE:f]ZW)D3$Vqt f/W]3-e.']}9f~rp}"g'/WMP,>umg>9϶-KV=zɝ+g+P}璧Pm߁eФB bNFHvRecJ%ktfPm8[R*!'ГNT|vJA9iIJ?jٍJ ŭ2Jt YA)( HQK(BS2s@׶[Nn%*PPaS ɂOpl'/>7%"u:ZtͼPvQ=0ح[6@fQ)S /rٌȣ HIY൑ dڷ.Xi!33XOq\ S@fZe82FN"rPa3svaԯjV`<Dl"on@yMhE֔1*&} )s'1LTD%BN a,rlyI_&"3GFd#.F"+1B3 ̏:0.VOiq.ukRZuybp":!uC%'uH!\<. 6C'3a;$H*s5,8Nɛ'$ ُ/(0yꡤKz7.FnJן. 43q=,=FLM'9bUM]GNq>ozgdyU2^Jo4 qcwn'[< i^ S@^RCmFkO߬i0x}ݼRb<9! ݅ 5:0pp;:!r=ųgvdJHRcB0@tF&(UZ!M>yas^xHtdI-rUݪ6+=ũ pz2JnJ3.$ݸ]qr #F3z<q+JL?囵W; ?ʐbFb"BE|a(WQj ȡ [4{XR-A~[`9B_?IqFWW\չ Shb4W)?r/i)#r5jfu^Yy/6ߞ0Ճ-y'oRS~b!xx-(ՠo5[ Vՠo5H V0X}Ajз}AjзlP ˆgan's/5n?Z2}`Q.qћS]dVx qN_-AGhe3Ȱ1*io,Q +b! NmDQa[z<^{l8q_]g /$8P>ߓ@gnpˀO?L']} 5AdЃ֩:IP 6y @ADe?@dD+O01t*ᥭ5:gI .N_W%cP,UIR  !dbKlpDE 9O\_N >Cg7WחF;5n+exLm9",Ѡk_ ټώ#<điO.gT{ܢwNJ_ϩ͐ q Ye=&oژbF&ZHF(C|Nt˰db?en{}Wq6Ω3~??l\_Q3YoJPq&Z V&qȉ*R%EɒGn(P}y0 0)>\4:bbj0 ]rPL)p#pBJhV XZ,;E$R^d̓S<Aj)*-['a43 UB < B_}fZOZbehl-`MM>mɧn7ƁhCEJI*-)g&4Ld=bNB%gSA x4@F[Ce X#Hs6$lqPZkQXt Ay*]8~nM`a慑GY #xf1R\0e79ISèo6OP롹}?XS cqdhQ{֑em 7R x%n6\`|?rt_쉧G6.W}F4I?dd1ZRa5RZpLI^ Ԃgyt93qfN~~N:O?#2oj8N?>hr}!/]H{ZSw@-mȄ׿ us\p;jj{E%ke}yFMoom4'̿GS6}!|MI)fԚUG|?\6&Ci<}Od{5KW~7,=ү:s_~eOO,eNqY. 
1loZxu~ A~|ֲ6K^#-w{kбE^!>^yMyMk]Oh9ydc-[CyZz>2٦8 7-Z}AWfَkos;]9֣Dՠe#T5dUjC-[v;.muB-KpzL_*|_`}ʝb5cX9tfs\{}yaˬ-HQ0hECIstldZ1"$aXuix>;,!e 5*kiWLQ %ы|&SYN:F9= <ڕ^Xu©lټ{ERh OZ}4~u%4ؘ\k^T"pOk=$y>Y"pv<[U2}T:ǵ,A9lUYR(']eWP8Nz[ |z=Civ,k<}V`7-*eS^SaA?4|'9a=0rz4m(2y?~lު?g4ƛe7a7ugkvHܙF1S@^R9F_7oNcEw\JYfL\(tٻ6r$۞0d3(#IWnRWdV2q7]MVE#J\ X!XJ>V =%䣕Ћy0 912FHC3G̚fҡ[.J'8OL{@gYVS>f q&w\l-r#6?Q=『qƵ_&6ft|:Aj, ܸJ&'44o0}  δ|^L!h,aùb "D,:HYr F#s\p,զ ,)LYHv ySzb>DSp&٩2*:m v:o5PTQZI3Yl"6>@<P}N-9S< 9"yI# Ry$K9BQ٪"DpE="Deb\)"=C5(LeY+w3]]o;ML`١ 2FOi#Ĉ ĂrN&J%Y[^ܐdPg'ߚxBIHZ:GZe.>K<Η yȐzz\a58VPOʅk+vu\IrDޏ({?.DΞtiU,0M M s!t0;=%Pv < %HZV3-G9?D[2")!ExВe 5B6 S1mMΣM&.R5KP)I턶D., **\pP4B{K\9 jj襸Ԧg(D煒В҆Ht%[mXO.M!@2-k=Zt]tHuFF9L&Sb" R`sfStc'~ii):]hf^>[KK/KI k5VY'l78ywl!rC#uGq_*-C vwQ}2aO-?&G$'HW>F'P$)ц썕 GMK緓4,M4䞩0Hv$8S Ʉsɀ]zt#eHLR@:sBYfL@м$,Zꂼf,SOJcnXZ~#k% $Ogo_G{)92/~ݥq)Ҥpq/l..5MWY>&'\V6xR^\,]M`/~W]^~xٽ骂D_F}Q"#IߋŕD:ϯ^i8sJY&{No. ޮ{$F/g%w丯\vcK_ݥ~^||~?Yf~߸)Auu~܅ 2]0h`Va2^I匸k?džV.'ltsEM5}du«3|wCko;V{룺*WG=|z2je;-wϥfϪ]N?`o;΄x:!ٹg-/mO"bMdpi `i._ lDs1pK|^*k\#kֱ[9- 'ɸo?O GF8]~3-0j%kΊ-=J`m-똒nBz6}y'~/ ;/VEǷdů4`-ɗV2y= ͸Y$_GVk~ / Lm|8[r~3VВ%MNezMq3xۙᅲӴ/9zVi s{/ƈ1βQ 2D:ed*m%S!lco\N:-ڴ <8x:撖 K2yԙ;!lL]% 3Y =J;OU9:xHg,Q9館ǞR/}>⓴V .HOイwdH9̧c\@Å˥q:*E kxc^WMU oܐvM7Y*š\7RUj+c}~Pi*QV{mq̀E t\,)ϸғ% *Mmq㾁S-l_֞9MNهл>v+[,R*{*焺6/yUULg"7L^ٵPx~s&/M^<94e;?tv0=?9?NI>TrAJq|R@#R'xMrRf@Z%)Vuj%}-yv!g]hQ](0HWE Z6g ` Vvq (Eh gҿkPi ӄ- 4$fc4ɢ7<)Xv Rہ]}}3ȳMuhjMoUuvt]z}mPZ_Z7]pN{:]ٓ2ͶCjPEUû5zk> WZt7I>( 1|_u9]xW0TEC -2שYo^,ƺS `?m.mRTvnRho}#`>&r]n云()H?nH7#x=Hw 9EA$ёq.Bdʚx2d* -"&9\Dp|< !*( 11g k̵K "gs2q2z•,Q`إ2 b SGf5` SLh bDxUZE^X  6Vy( >ـ,7:(q(nѲ!s橀>s {}'> xh&l9$iLl€ӘKYX̊mɍk&a'[>J %]R*f] YR “'U]@96bHlmϧG)],u|2Lf{jͲ~/V9q};Ο&/˜gSN2-^wuqyS?ʖV&HkVQUawjcst,PSBbGCduX--f[ONu+蟚콽×p]0,_>jjt(=\nI|rBysɽ݃om #N'W7V&sl!z*}cw%ݷnoٹ]ݜoj~r|Av&HVZo$a`qr=aD_]-NA+6[gn#w-x!pgcWrД'A8_./b̿ew<%hwpB%Mnؙ*)!$-ZNA]l[ rcu7&Khuc# 7JPWqSOaE jـd- hҽ@rfG͊ =WjjUY:]ep|u5tZjuR쑮T=]]pu5tθ@if*i5+\ ] mW0P9U ݚnc\ ]_ ] R⑮2tEzzet`"}k#,]ֹP_]tEGzSK"EWZjvtl ЊjpY ]^:] tut: ++f38ຸh/ʥ=woCW| '&=5񕵑#|{[w} c~}q?ƀz6V6A2s/,qsG.IǧN<m0Amqp1q%MY[x:c,k2EW27jUP;N9&YϔMM h@yoHWCWJ658F@ ʘ:D M`WW3 -@ HW""] ]5hmX:] ,GЕ}Wƺp#j?/~p_ ֿ.a]#]=uz"#W ZkNW3G:@ Zl\1k+eKriЕ&zՀkWZx38P "] ZZ-7|j©\AVa=-ΒYM?h5,4}4ɓe?u/f &ȧi-Pe[05!y:6.z}bӚ{7scsۃ=>LE(ȣ0]Gpj+h[GK=xxMS*lz. )Vyt5P##]]) ŸՀK%*x fMA WCW_mP/X Guut| f+fY] a5sWc#dӑ2t9Q ʗj?j_C^&PY31nrGzГQ"`ojjuhENWѯ[;]mƵ@ox>] G:@rfŭXIWCW kyt5P.m#]}b17h2LƿbYcx=%٠=m½5wth}7zV2`V2\])hUP9P.J rdW]V`GcNng`2ŗ馼;~ ?!~o>?/5J`As8{ȷPOwK \An~\n~Wf~[z(9+CMBfܙ(;OK!/tgr}B{~>Fqut3vcf`.6gok=\NDŘH-jrLlcGaIC#;iAR+k" LĘH"Xtx$ɤ[-hsbtD1h"rZ(\ #s#fԨ\zNѥ(k~-6͛wo!bwRu^"\m$.CIIS҈S3LjDh9{ɌazB4Ccv)>o\.蘸p1-^rJkU?{"̰v^luTb!J{Іtu:V0C4ي06Yʹc0&?A}4 4J\%vMmR ߚ EDπ@ЈD&]n.Si7 -^<):fDyKƜć9~jš9 FVRrgok*u!i(%%Qhb0{\5T3 <}Ü|X"IG[$B$Jq -JI!!QQ-~B_[w ҋISh#X2'Xs(@ul34_d} P< DܬuR\`S5*Cߌh]SwZc!MɁ1f"Sk ],%Ģ:D{j ޵6TnG0Lc&)eƧP4Z d\S 5<#BEEG | 4@s8VM+ q:k(QN[<ĪJ,d.ώJgǚȭn\%OE"n%l0v,hԄ1U57k0\:q cEQ7f=a=g3 *T-JջV8ihK'[X]b :'ؾ]|*]3!\-".@60؄W'#`AVs (P2o%Cez|=clB1K.@@7w-V(!Ȯ脐;>SefH57J e CX'P ! 
AH("2%+4v&߹E52| ߚU$$X,xϚ`& AC\6)XKI!0΄W .a\+S r Q\_ fU5Ttgc(QhWQQ& gY6 RX6[G@PS0jlݕ(Ū&LR8FgU]%ѧVOcĞQ2/gU˙WG( T&% p`l,@uG@H <|amhCk4-d3WA[twhw8f:G'7cEMMD#Nȿ( %`b=bqv76MڌiM<w{/SZ|w !I|txK̓K<`%B[БuJ"g+:\e ,#jXGbtSdzxA-йgǺ:ɶT8FӂМNYhc޹͆ڸ)XV=ԞK 5BL@P-]{;Bw*mvLqMklpg^qi` YVm-޹kiU`&%|3r1trӓjk"KnIN⽑wJ$vLڰKH\4amO0lܻƷf~ۺٙC+:Gl y:_=wmτ\%+n'O C\O" z1N ĕ'*)''@k DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@H0K:*L 58{`t'P:GN!:$6"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r }/ \Njq EXhN ͸1"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r +w$'+]p* eJ 4D'1Zr@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9bkN mgsmO|ݤW}fq5_?b!0%p5Ӕc\\_q2kK 2. Y+QCP\!-FV3\!0$W+kaTa +#W|)re- er{=p - $Wl9϶-F։媳(ʕwBڒ,Fs6*\!8r,z5f urN=ן8E?\m]+ErТq +B\J+U6wBʽwȕ+W\0]\!.WJ\!&\Ii +B\W`h53R YI#WJy)n-cj ct{UBs;]M.]bzѿ3oGzws3j æn?|4}κVaŇ].pKy5c -((gov\isXZs ʵIg?S@T^^guq5&EZ/!kU ͫ>W2% C鑙_,{?;% ?hvzy m7ߴvNbvV)~O|7u]"JI+oD'US!?]qZ{녭Jy]+uGWH)\Ie8pu1~Xr{g\ G_Xb q5+ETS(- )W+!50 N;v `>3J|CyR_;_fșe4֘R5Y}g}?rN W{NT9vhvxI uM1c35: )fF+KmP+X1SIH˳+$WC+ AzQ6 qW6:tP7Dr ؋b p Ҋwm"$W+o3 `\H)i#Wc:3y{b+~2reIZ}c 4*U\!aRZNr5@Z +v\#W+t)rJ.WH% K+> )EWC+{XH(WCɗ$W+F:\_\!R涻Iy)HبrW3w.WH(\ykY΂>!hMRZȕ;ݘygL؜ڌםxww/ړُg2zȕ#zh Eu\#W+d)r.WHJ2%)Hb pթ(h]Rsʕb2[\!2bHk`pr`2_3d#fZXIk8W^kFo#7Y+3֚pZh8ДsKϺK9)Ją`Q* 7NFWK :˅E)^nOesQyث˛7YC^ZmvS>z5Cu{a\EWDWG?n-fW+^xt>~dTй6W\~r'n^tz}Ƭ$/c`I@,tO$blq*r(e˛R#v冇&)Gm v7ކ e`FF̘ڵR5n9r9Nҕp*5w\$Z1h)>}dm<lJ^pUӈ5XYKkYf=&<;_/^ýs/9ߥy~_6)D??0^VI&RLZ蘂^5C"nilvgj2] uC kT db:7{ >`-i::k7sF{㳋Yj÷jm

p@p;싛=j. ޅͳUji:tM[mRhBg.+Vղg[O?w]tw'HiB̈́k9^ڬF@]>15w0(ֱ0t_k) > \pUV_ꝊaJAt7D˒.f[X#jfHJԎ+q'+1JF t Q䮎)2l*aZ.AP̅2h8NN/TŇDh.w[/e6|$&Vj}y϶{POűBo//IC:h^zHfRtBK]תMjmݺ;ETLm纭]II|[' pm Z!tcdF]{oG*~0p8 =llbӤ!8W=3HCRtS4{j~]]+,/[/@ofy;4Z]{u3L_D5cS4Fb,ɲdTZܲ}Vkf~Pߪv$3Jr\% +G4$GlNZHyvve۹ߩeݎWŴY;i;˾Q} =Ơ̢KʓɄÀVD9RtJIӏ5jjNf5؁-vw"Zu7Pӏ=3Xeq $K4)F%4pÍ"/Q~GMH\0b'* =|iPZk/Ԣ$(jZ|YBkPx靳^Sy$Aۀ3#Njzmb I̢fLɹ+괈R_L2X_Bj#% #QJQin ?BR 5z^Yx/ƩWʅ $ Vqaf `9梆xx5^wf};Ov0K*ݲxa~].3c}Sj燐>Y'Tjs8D!vʮ?O}yŵm%/mo>5)z;k4 Ťqo̐;7Ec)d9n>oo6x;^}6d3PI# KE-G37tu3JdHuIugp8Q֧iy̠Zr;|ii>#8x(vȠ?.RuDc,];NgrMwxa(~dfJ EtmM| N0-.<&<;hXnk"K gb >ގ$/ϜNsӭ? e1\v) L&T4=,_N\Rgj8J_at)Y+^K):_UWLJ%"j~s=IalԒĀUXY-EaNeasʾSmRBBP$m%&D iPHRXEd*3s*hս)?O|Ө8 {{ (~HTOo8hGϤi%tzqZ71ӈlTiI(P c2QJH*T+uRnPyC3- +:T=Dc)HyyHTYqB6%P`LeDjzYۡU>yj뗏oP ɑ[1ϤbZ)<y^Vq&=+ؼɅ皣4JWs,ȢίS|hGR^! yr %EUG#DžՁ@rʍ1g<]T@ 1P ༷T,=(OJԛ=SbH GR _fGt1 p'coQ>Cy8 '(!r!$-h'&I@bƨ_x.X?ED[̀"x&1!:Ø$@@:s鍋jud =ЊD)UΥX5ƀGmR(X8vd0(9ВfBsQo#.,YKEI$.JF)ՎD':w$CbD ڠ! px.xX;CVqxxSZ-Ҭ{w_1 vu\v=VBƽ)yvQ}6a-?*yE8EtĒ;s.YD@ *0#iZrh餈VPYZHS \Pg;͂r. K9$(A*+H9R@d )E@Mr% K2k^G4 AslXdR~#k% wi|h7ގ&k Cs~FIΛ\wq/l..9NW^>&-Wy'4˰W9^4 DEMR7Sr~qԥu)rK͇F{f=:%FM5Aç; &{C9*Wo\mc͛V?z;nU=Df5!!c=7w\ҍ;'one̓>(voׇcd WwRwY_ڹws㲦>t}?WoyW;W롸X=3H>ʷP,(#߭m6C1'^4oo63h2]{?FvuV|NcĪ^1."6ꁰMRΕ~I,O8uYY;5rY=L?qK4@ -ԑBtR+e< (po2"h%TInٻ6lW/{ `bqjqL2IK>4%JN`YvZ=oijc4-Osp wIk<)<1o}. ;`h~Sb]͋azGz\*Ӧpgkg5a3Jp0ܱ'q>C8)[0b9V"cCBcS;kOc7[7IkWKKhv]@GpҲ3 dthJkR6^Eד] b+&6Xjģe1j$*-:A>np ILgH^E#:Ƽ(D2hB2JIL֭Dqp !&\E)FQ), qD\5(/m|*$sZ\xP D&QO:AJg"(=kzVq2X1#C@p 1jPk. Z(YgJ#$i[>Q, Xpᭆw2A! sFYm Y)kzJsz gXP'ڔh=$2jEJ8Α>,DG7(ԛ-b+NܝNSzr!Z%k1$QQYDw.5 ccM6ݡZ2\DEs[B001Gòc c՗Ρ=n jkyqdP7x@>X?:: )0F!E+%qq&4A;ǝ 5Mh6{蘢+-<$zHżLb1BmkrDQj&&8a.xؘ<pF%R 69k5IYHmpT( Z2M v{ TIG퓮/R Цc!]I잤s`ۘ|ƀiOOMHt,'<}ﭔW?)W=L{[ܽz[y6?Q6^@xt%˦m ; ƐOݮ5>Xm|q׹uNd\30CuA׈ Dб &s]E4G}w¸K5+ټMvr959uЮMecD]Y u;]dr m?`,j? ɶhEgkAqetU+T[*UtQj3+M%&- XUY[ *,NWeuUGW+zvRcn>rpjLJW;(t#JEW oOW]SD q|u#`ZCWm+@+l:]ewtʘ]<QnhhsWXbmohr_ҵ\Z;}h*QWUv(C3#Y/RLI ;_Y,nυ}L?Ί}6ȉH _ݪ TO4g߳8|G?7N!o錓ɒ1+}! 2-S3C\\*Ze;*ZKMM%Wh6V<|qqדArt\_ֳ1$&䯽uy:oʨU9zャZ+v9L}d<+ PC; B "GT?(IjV_mPdͽ7Ɵ;娥 4~/b(N8\r=G0/ {^(~ߘsa@ 󫗽'gg^Eo?)sJ`$~wb;q2qךZ VbbuRry65JYM$W&as[#!|æ)갚ct5GnչtT0E46pyk2ZɛneJv3̈́i]e=SI.WmVUFHGWϐFi[jI{f5 u-oB]F):uJiNڡt?Z]\{l4(i9ҕ s"ʀ i ]\Ҟ6-1MCWf=J^ mpc fsW]ilYj' @WUqf#J0uEv+վUO EtE5pik*tQ6m#CGWOBWe"JUH[*զt(w:z>t៪M &Op&.pi ]e 72J;zt%DՁGÎQDp>)Ur.M':u>* q<[~Nphz[F)r-,A9 J9ؕԩgl9-Ro .oQ*t( x*Cb-u +FZCW.gmV6((ҕ"D4 N%V vC>+M8i 2\hjGHW&u]eFx*lZ+eS(N =a#;R 8OPʆ-vtw3F+"Zh0=wi',fο;93Onz so\H(W"CJ-dF\‡>O#|P_ٳ^^}mu:ͳ]mȯڃ L`T'%;=UD*d4㨌nc σܿAEpl.ܭ|ǗB(yg.j]'@Jے ^.JIE O=dATD]w+]BXmu;VEw.#ƽːwݔY8--sN~eͿ+WP# p~g*0^@q?N# l/5?j:_R!8kքx]%8߼3&H e0"Ne~ 0?|k,.=ˡFp)- axw1zϬ<7X}VH@AMA6=lr=z(T x jMi4\+Gò`fE+Tl;SY ^oPťiޝ,\B;BFGvXoY|<+.6cٗ%,(ۜjΩ,^PH|6U'}8rv_;#x ԮA[/gMzw} /A{GYFVaSPSnQ>%}N~9@RI+SuK2xrvq2YZ)PL'dֈs,"'s<#Rh)riP oAݕG^Z fŸ`LS D}R' N٘Td:MS Aúh92jIHEe --m|dRya]w3k%9wDzËmDLOu3E dwKp-5 .RmZAlvUtة## f02B!!7Ay4 A)'JB ^O..5IC=D˶ʩ|||%I #Co=wRƑp̃`iJRe(G([s/.X A(czf,HB<*ĉL</<Ş8b>QJ*Yau I/GNQ^|jhX93yzJ>%TTڴ=,@T}”DuYP U(ژȹ#|I:34le TLb!J[[jb^A_Tz?m} K?wM\}|nPX,| D <ىJS\]%Ay7ӻ6zd*m|3O޴1Sί혩قO՜VϬ2!Sѻy/ ^ L$L.Bk+ݯftĒ\XTz(B N: RZTߚ7rvkJo78TutQuᒢl7'XkߕGfy~w~95vqY$j] gAlE*( +苜;MƸbb;ڡg= حG>)bV)C/<R. }ITW}XSҰP&ϺIdac::"Q͂ =٭[~RU{C5bY#A#qA)+H2Yk^Gb+N TrYjOd3ѨR ۑ9rrfKZKka5boֈ ^zq̅|qɁz~^h҃@@6Gx"J@D:XO>4}`C* 9cSчqǡPa*lYf\ ]{No8š]EG2яRhRDžhT1_=h!gk hmp!nzL$ki `FMg9^tT@d6PY$uKr L8Sd~Tiy+曯:Y{m/^bW[A6o:HޏKpf~wO?Y~h6Ϗc3gf^\2k Ѝ0neǓ~2| N_V_ǚ6]vwl2ҭ_Ү*PnëSt{އûP]S^@vQbBq}nږOeZ~UFc/?uwY:ij:A-sok}XmommD<6]_vOoMجo6ˢXeuw*kR"kֱ97,ƟٸoG;WG8]~{ysݓnak.3GK*t4n~(~>fnAl˽ﱼznEmZ|˻[~a. 
q;[RN g~~ e@Iޞfnopx^y:;0+znV|SlX٬%'09Œg'2-no:z-ʮJ[m7{RJ䄐X4[ А(\"x  4B\0K@,ۃ܎ѸӢM+=_OUx̻֩ԲlC9-2J)+fvY*|p>ԌnnO&/80K:R|,c=tIZ ARhX9&h0O#К`1ؼǏ}I IS.I6Ŧ([c\UlR^C) g#e ¡FøAN t)Қ1(:% Fna;©~1KOաzypTz]_j=v^WԱlīX^ɵt,:s+^R ;fM#SF'FFkFQ"F(*}Hoz: vv"?JΟ./!f;/d9z(k?m>O~J=-*hM.4@7I+hO-`khMkiI;!c()):M 0DoQI%*!{eDX T¶XE0I¬U#!7rvJNNNݡy1_h*<0oy!zmq,'4 7|S%DBcՍ>5mh 服Bi^:q;KK)$FIҕ \6Z:u\H'ՊA2eQ"2XYxk3D>&eF)  :Y=٣‚#$HցNу"(uFc&@+R r:b CLQG'Ě-R Y\TIA [J#j ;Ep B},n$B wT W>(TDEŢ!W9/s>[tsh&ޣȰ!pEМ7Y8YA VUvs܈#4K^Ma%sYqr9+/:ԐI,D}|\{ B 2a2R'̐[ц*1"uA %Amr\]>7|Sd1v#" q!ٛ+Rm`F`3N^scfmrlz>Ů_c2֎Ku҇ cHnBlLƸ1cL[ZBTZXh7dFo`j2>g_yj6cP2j3ڨu3}o}~ԍtJr%F78ytAhAEK_篣.a}ZY e!?1\d(uEX-R*e||{7]}l-/l)(۸맾{N_o{"6$F}s Ӯ\▩sxsV+ nxtutӍ>=lǤ1;zN#^|w+#t22v3sCk;&!g+Ȱ[\cK\=(2v;=d:ʳfs1ߺ=ܬ~8qe:|5 d14- Ԩ"$}-?,QH) ۮR TjR`٧s0sη?y:'*?d &C5Q[elW֜Vb͍ =`$ -mq,EZRiswxeq2܁ [G4x1KtQW:hBQ1$)lc9ːT_RRy'TuAdI {]0(y1"3!p`6L@l!ԫ2FdǛ%! ! s))mg2$1."-Z#XR"wmI˾~0p8'9,-.-!SҚ&ekg)q(>")gWzԑ$Y:Pz0*9\!EX'ݽKai.5<& h D+"DC n"0u !9e31rJ.xQ?_tk rlU4­Pnѭ؞61?M$n O8D-lt}7`ZBX gs8a0]>c}ǮV8m]Z&k%#2o\p$iKϢʟeA24 ye$ )tLT&QO93%5eK5rdjc.8.#pc˓墳ta `i-_;PœPUL6'SeJUL-5^9SIWVE%A^g%svuN-xmV BxL@0@W׶,VZI@O+C(f*MV]5!Fsm \ă5_>DvwBlQg|(jVr)| Uմlj,@ds"cЌYKQ98'RDU.i TIȨB2J7 KmhrAk 0R/EJض۴EzrѦRWt hOQ!5F؁6=TIlH,/FMD-si $"#[:YBas#HN~ VA~3 6IGԂ' LJEsbi0SR&eG`AEAE @AMV'Vǿv=R3G᠕TE`P]N]J]QcP 7;g:yfN1tA 0V贗]>ФIc\I>&FМ9ځ: \=o1~%/׀^-Z;Å)ʸ N XOiMH@(\s- ZQ M1jCT1‘NxeTO4S< EK2B4Z@*A JQIAQS@3g Tā84O[hdN Z8XskeI0`-  %ADcʉ\LZm^XbdtZ%B2Է&GmjLB_ i mz=+0ZgْԆ\7P{j:IldGa>X+'D#0D6Sƙ,*BJBvj|s5 hBkei^O{W2ޥ\z$_i^Fҫ>q?$ɿz_lOcCbIps G9)f{W2g<^_u/׷yLq.gΪʮqC;@z^{~JNSYC$T~B(쩇B6Z;Dkc$h}aN\\g%L_udީf_M L&BÊσL}fgp{!TI"`漥)5"ٌk:e2N4̀G΄`DAC۴l2(O2{ hn vj'Y\-;z-F pÛBu/.<0%$*xв(wDqϛ:AFv|pxHtD!k5b5}ߡwUYPܠ13MGPK}y;HsO 2(T73Iu)^U|aO ڸb{vۚEЧؘwZs3mft5<* "$a" $SZHޱ[]g)I4q1)һnץE %QY @70p$YFd*GEjQ)N(CgJ;7Sz(́>]±`'8 BvΗ^Qv?ˎ>PvBkݪ6Ƨs3M?}e&-GF f/Dp.ulȶ)Ye[]mCrrQI:ɤ:U\dS|-2,fF.B%O.ˏkݿkq!-U)Hqatd ܀unC/Hp/3/$'GA߈Y~4fKeA۹0qa83ĨB:G't8LǎtΟҹMRW`8uɕTUVcWWJ#;uHn̙ޢoQrQ޻9k$U80&z΀w?{CoTg&ϱ'o{VGD(y{@zU9.ٽ7_gf^#PX/ 6 Wbf<=s. BxQLFb~iq~.9wꌢdϝY,ctz_}RkW Ůa\r=+lCoOQ_ezʬ=)O/?}a+(ךY ;MI$/r'2sWkdalHE)x&ef9.3Yt:9~=2Zoaej>v,ST=CL &jUVu4zJep՞y2 sG2+g*狝u٨+Bj8!u+*NF]er?uݷT ѩ'S 3VË88=bB~UT"ShpB$/ჼh!9R,-"3Eb̩%'.KN\&WIhkSvRlBs&ȝ "D(ă SZ0˳;]ƹ?p*2zlC2p%l^onRw/Lsf)XS6IPE0|l#6 !WXzb;|~+|*[oM`MQkY`w[Ofmth֝u`;oIy4]u ~iZ5!&ji.*wI4^e0 Y]痶6pN9s Ŷs fi-LʛG?@pۅ&R"Z^8l!)a ?E3v1wEkd 5l;dp2m!CPy(WHꅎ1qфdiLnfu(>E)DL9hmwxSJ\p8s1i5cpcl`20h2/ @>/ÈoGxkdm^ӦGdcLɕ/pI@Pϧ=W-"b= nE#q;t,OXL> O'i9HR")zYpl qdb|ߌԸE#;,o>g(L) ^%)IhEB!JY;XGAܦX22E*^`u +?$E] &s{s ( Ewd\⛢x۳mY,~tvr2or]=;[<$F5eWHCMIW0? 
mWR~S* J0_Z}cDFϋG^Q}k?f7]7qUAjb|uT)BCƺ\R;[s_vA.f{y#<_~ӟ6jՓT5u/om_el[oLuL%c5!5ܿ7kL8 oZ~z}78;{&jxtKFP/h|#cS+ }˜_Lˠ*7+cjk>fo◫])+fIsP 0ɭ 'l|py }ޢys.JB(I@GJbuсO*` *lD$Eme{jPiѢǸU.Kܛ6 ]x'|85!D]oGWD...{]].~K`/kIɉ~!EI$EQM'Nde٬za*Stu=yݘdI=qgv?ȕ$ORS*=qr~/6bHXɏ+^"1 ATܫt 2n³ +% üf26lAЮ‘Ț@x$yAI{0(@==i*l&"kO##Mv%!S> ȠϲX› Z"sL@+ܣӘj)/L'脳1H rt?1ܛ1B1Td-kdeP:Yμw }֋$Sä3 4>w-ڝkp3w;l.݁+?;q*}M84:m^4Rɲ܋f^G &dt98ǀTb̑cA{CTT#B軗V`UA<1Al2!ˍAz1Uj m"?;ҿ|sC7[?Ga0gP-XF.j<~S*kˤߐV^Jj,Xqqޕ]8?j?fj 3l7S0ʵFLoj5i$x=M J&ZcNIcұIWؤd -wALG 5FШ9Fg(oq%n}{/ n}[qz@mCI!ly +41KS.MYZx[:^FP%Y,1aтT#PN&NTBAe]CCכ>+HzqX llىem#{jgsU_FagǔgI邎,( ImiqTxG#xKU:,-.Q2eFsl{C#t]:Q=jͶvpKj{!WN]䯝 'Wf ^к U#}^FQ0pN6 ov[}fɈ1t !w$t<e!0DwS(Tj]*nk 0E6[*TA=Dm`e2׶檑5w_ ҙόH[Μh%s[ShM1}cl唱Z)s\]~ fҩOFȔEF`6a^7,2^aJ,0N%U$cp[53tJ")'(|w/t#cаr<֭21;z8~,a*X: :HfRtB rcW9Z'\Ieerpw$%rLRKPj"d8RҫrkH$ʁayY_ޫ>j[>zL1Oڌ+zdBRH7At}bj5wFKiьFbG,t zw׊#HXT ~^G{ins\M ]#״qg8%3L#L% $jcEԚMi%r[d-bS#觅еUGM⷏Y0\B&X%pJŵ4 >,2E$= n / **$g"V B*b4H8J+'>NHGK>stRygdɇ*o?xP# f̉G.5L/Ɵ.iYR=ٲ'-GWC0 ɰͰ<cǒ—H(4Aeu=KU3O&t/ZJ8OJWKj탁n;| uJd)g{=i.Oba"wBD@mQھ"N}+-X-K_INoJte̖f?}8&2mHat9쏛QpKQ!Kn_ƣ4ӛjKAgc~gnbmň%-ݎ~\gUZV+`7[i罖,ʓ_,h@ KWj͗pUD횕WQh7yͳt֣u;C+c3Ι$IN6+B~6atMk]1i'I_'٩wk2,u5t\ܫRЁt&ph˳avꃴC_d4&Ƽ݌EۤUki1œN] /Dfޚ}5_Q#{]LVa l,k#SF&tlxZ"/ yQd.U//JQY@>x3"BQgt{xdiKT㎤ {uNAK^Uzei ϔ~S6ѽ}7AN儰o3~Z!v˪W/ ?ώ~h (o[fU|ͼ 6lTx bC}sC5p?rzcg$gĮW4eDhZk){)צW=X˰yJe<WWcGpKeۜ01"1vȀt4z˔ ~C@koAwG=R5fI=y!w:yBJ(C/k,'!\6.{S@Ŷz:/iBڋ-;.x{ҲuVzgS@^^L &G#b֧,B:!*E6C9Lvn"f| m-ǔ,Jeiy)u21 ',^j3T(4GZzҦE~]Prgrk9N,iOt5NEN>< }B 1A#_]{xR,RO1Rcx9R$.:t y*o݌B,,s%)#N$D dd逥]()'Àrf!ZnN3S/iN;BA$ ̂i煎 R1H;4RQr.3w{W` A N{HjdHU\[#ҢO?v=Fl+s/3%G#011sJ!&P:e\h]dք=ltIZX%q8,}JoD\1=;_U#ge'"ْY"#[/" mxe~̏*̽siU:Y[yu]^"5  NظyDd L X 7a0hf0(o,8sT)2kYFj&eL<M{3hz#آH 9*$^EBmRȪ[u'#wN(ѨL)g9lgTvH"#䑕Rg;w,řbM1]>{4sC B%0w 3\ƀĿz7)]p-|s^OeTTQ8D)z&wX{27å4EFCl(@KP{n!V kIs#<2 y}0LB_E=/\۾l{uMl Vzttp}#v-lաkPV}gӝ?Abq1y-=FuTGh!:C9;r6ɇ9{:;NuSB=1˜OXP{wdϨ`o 5pRP*RXcḧl6ѥ'l˒-9e(d0Cˣ$qgmx@<#0j ڥQx8͜x0>DE43">#m/h&914ƅHrJ)+S͖AY˾jsONC7M Ȯ,oj(iǁ4X..Us:gi'J;,{^~$]/c4aS߉R}Bh`OeJ/'^ VT`*`2`UHzzݹjWUOWy'̞w#/^:~Om/hMv2 u/]޻1L;w6j6~O~HEG qնjppru0|}r5nxi7 {aIWO̻㞻xd,O>0*z5{vvu_iLU??x,]5}ŤUVh}.rhܗJuaIޞ.H+?mƉʦZכDŊ8rYLPׂ˵wM%Yۋ ;"}y/S]ϓw{3~\Ytϸ9M>?ڍ|zߗ7t%SƮҿ'>yk~w[<)UXl|:d lc'R& aO<֦Ga%Y W \\`djKTjX+հһg#Eb7%ʅ߬ ^:̀i~t4Ht% mlqPQ\ŅK>h{L+ ߽\e1o/{;գdt(/|ex` 79\RópH.Q/7xiɵ^>[Tl3i~ n&}}xS7ㆬznPTK+ooͲq.iܥU74_gәXqجpc/ʩcQ"'7#U[]>'dr-^j}Jb->gos^:7ά󾏬=mHߟZ?,x\ۯzc&n9TilbA$gVIQբ9$5LM9X4Bd6'N\]:jS%ƥƶcVݚ~awV 0)ma, N6d+j3+Aܢ<~3h-6RJğǠTKԻHّ]&pu]LBLJZ$mKp-kmw}&T/ftcw[x38f(CMQ6.\Lj|`pfN>շZ]xd[Rv\m{@a,7ĎM;Sgs* A|4t4JHk1zPt31BBqx {ZF$z;Ү?!jI{6Sgd*>̩F7D< p};n.`GTL˝ZQN"8gRIQ[ۦS{fuƖ&kuľ8L<,JGj-Y QXcKR4*biqTA!7z1dm4vQ7xԼXljh ڐ749vj* rU C^dEc `jZcC qȎޮmgCm.5ێ03/3 *ђQ'R4CFn1 xF,:WmBFp/up;OmqDq nDY9*`yUєYYMɖ Uڈn(E n(T4 Fc$>;:M{)vp F)]*jpWO &ԘإQ dBX+٦nsvh#eRJ$>೭ [C`!S b03q΍"f#~j;&r ``.C)i$8()8ؙljJA4 ʃA`RD!VZb %Rah,T \!ԖPFcx ulyŜ̸kpA!ѾY} `"E8HfȲ|dC4쒋-x_ZM챵p]&2ЙI{ܠˍu;k/Z1/M5iuN}B: FH 'k! D&wfEcU9zw%\ ^ND="?].1I|4xSu`e0%Mh:F%5R@ڣԡ*h`>ڽ?.Wve>\_Wiq\? 
BQ&nőz 4#pt5TZ p°ۆ?|if\d]/ q)kGCkW62G?[ Io;f.U%A9H\͡ xBn$Dh6?%\&w*T~ԭC A@v%edi90+P}s2*TO+I[jJ[NnE!Lj(HSyc(n5 ‘5 ,* F5H*܃G`r X:X qT1ٕdZ IA7 "7^fjcRf=Ԭ=ɗqԘ3i*;4"2᪥{'5e ve=YcPbrcToRE\SLhU\X 6 ,k@(B`Fo2Sp䋮zJuc^lO38%0ƇJ$;U{:UHO޵q#ۿ"ۺg~vI0k$_HIV^=#%3EVS"P,) DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%7 3R!@ \@V+3GJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%p@_[}RE[~bN$}SQ_V$IG$.:p WANkm2+,φet([S,pKψ6{ QJAtЕ: ut#U\VU7gΠ@WMϥgDWXl  ]!ZNWJ(ˈS&۾jt:++lM6tp=υb+D"])8)v `Į0}+D8 ʵWYZmoi 6G{x EώG߷⚔u ']ڛP#Za?_~h1_mNq6#!̏5?V uO?Fw(.WiSQt2̪r{sQ_/g tu`Gߣ (<ݫ}6r5xb SQLzUU>@\yU3Wr V OQWg5x|-v˫YDg^'&r;7FF˛{+y,ˢIJV":'Nr{L#K.,565Jٵ,JY!, )| uRQ-+}L¯d [P ,*-}r{|7C.P: zPTP7D+e>yW\ ާA!J/HW|]St>u,ϞxBjteDWiX"B6MD{ Jint南^eDWX|41\ ;R]}Ctlz}¬N<3 cWwtDWmzn>#ʰl l jfNWo+s*#\l j ]!綵OW3߃:gCW\ wB!ҕx.M51BCŸ-cB6{Bh|y𺶗u_).ҾP}4:䔠$ h]vSzU]|Q(mrgl |[\?@G_!d!cm e%x+|N!gm>!gq S2+l\6tpͅ=]JΈHW91(|6tpy6 V?D3uN+]!\3h}$lMDWӕ9 st{ npݑ턖+]JEW]6=W\+]!`㳡+{U7wDW+<#/qDS•:BFekBЕT>uD" *wB}CWJsEFt XWW\ *wBDW+Z9,cB4'Iqԭb6--%[*>b4Q5=?SJe3<i)wRnj Ph9Cjuf2opy6$Dx}3D)`VBI؛l jr+Dm QR nʈc.OWRIҕ3I]!`/+Y6iPt(%A &4(le>IW\ ?kQZoMoOeOulݱC:vdgZy4n(MҠlDWmzn2#Vl {snh;]!֙DWá+a.#Fl [q = P Gt5@' :B*gњ[W] aPp41|X1g'v s&KbLn*&,В-V錬 \6V8Eqw+QJAV ,9p)\ jwBF] /g6< J ]!ZNWR{ҕB[]`#?BBW()Wt(-`bfDWh\1 ]!\.s+D+| Q*Nt5@޿lt{vk.y(Zw HP CtM<jlj dCW֙=]t刮ك}:VGw*B:\J%HWysWXWءnh;]!ʾGt"tdgDWڹnp˅mAD 0ʴ> FB53p3[(]j-DK#Y:K*Y܁2NGN=V9ѿC<:uSR7)BZZF-Y͡<33CZ|=3D)9yf Ҹ +γ++m.thu@ !Έt( ^Yy,]`/E6tpȅv}+Di(`tY%A`Q&&BR"ۡ+`)Ewl v{lgc9Pm>;Е'zhs m3+φղt(% ] iaƂ؉ >폧+@i# ]IŹ\eCWW\ *wBCWJi]FtЕBQ6th}+@)'"]Ӥ> 6q'R*Ro)[4)ꤩ:4qž1?As>^*^IpomMgШp^|ѼbQX+|Ud^ ݿn%[znh3#@6CBBY?_5X;b>E)z ~^"޽w`X?n\N! +5UaҶ,׮•:ƨlܗs]ݔ[s'ypk.[=3 +^⏟iwݝ?v/uxJ|?ߛ)D }s=Y.F{z`j J!]1<||zjDS1*Rż牛1" U2I ]&"f-w ' Wh etҧy(>V [8WAqV&P`TNJJ9yY2L :Vm ;w^qzO'E2>8^A\Zi,v%?} &͇y]O/w9^xVgrlI L a=.wRjhN_-]75VV: _utU%KoHJV19Vv0SFURYh5%K* R'S%A+y]yV'+Or B#?#E|7L jmJVWɻ"K)' k0[A9wទ&|y…bKY[#|c-A9,E^9W%V„脳u ~|j:FmƘ+$0Ƽd-PV{, e]R$'9LB[8}7=|zv wf ߛ\Mg`?}Ϋy6m{˻WA1nfphbh"a:hSy65<ن q-TiNv`c(-b,B~gJ9hX8BEn /q*:at!@o|q_G1SΗh;o7n½D>\1WADlyS Mc~kL`(}JV~턳Z/WkZo< txC4I[-6|M˷֩9+fE+WcO:|@> yhd _l6)kZ]r7z-trKzk=eXtkJ7y'n0ݚR?:n 5zl7 x黋9[SۋIyl ' xË~U $Prɻb>ݻc1:灐 i}{ fNFMQp֗Oڷe(\k}Gi5>zhxb3w@i+x$ng۰q)&oD(ikm#I u?_ p$r¨궵Iz$EQMdžejf8,tU_u=:}hrcDHRAƗ.jiZnCk \nGrw!brE{ˆ&H6euZ9d3l2٘e6)TX (Y &$}Ȩ! PL[mg&Kc[3q]ڤ]j }nwU\q@@zEn SS[T0=\bDxUiTszНq;/tG +,JgJ /; y@8@֜8=HYsB@֜1и]d[8Mh˘\.I\v©X\s)U&P'PL\2M.hP U` *YT =b1m&ΞW)'hwg/y~joV-;E'9=/pAȲx4^t!AT:Jgr BdBJ]+R!T†\.$ծU>V$A:Ei SҪ +q3qV5ZF>^J̡\,go^nkAT'lL>v5{bUɶ,=Eݏ +T [2}.%˗W)e (Te^n;Ѩ22X)Rw i B=U-y;jp Z,[tQXFEH \99 )=&_T" 6h.Dkk:PˊWOt<^H򳧠L"ղpVbBP2Ʀ b23HKBf*|F_,ڠ'}2Q,uQ3q C}T)x3d@($Q9p9%ABJ*#(2z =Mή=៶Yc.(uQ e!P34OAXɬkj`5Χ6ﯠc7+ko?<}>ooL7R%3; ;3>#XPB6w̭ʺE݌wnxb6pSHv{E+DHѲ0ii rQ#o^DQ< 'RTvm&]WFQP( mbLR"(g_T)g51i&H:dsv~q^|g| ys.lvٹ[@<qΪg[[Ǘ a~oBPPuFxw ): ӢjqOJ ~!0Lt"`EZQdrVTA牮糳K!ڋK|~.M#tL//ޮ:O?^VұLfJ!JcmbYk],>2O.CVSD1dj9fJ̒u3J2jd*I`1Ĺ[7o@kmh}PhW͜bQh;0KrZ8gmo[A*( +4'#a{5T>]]N Fw+igm]km i fEXld=肦sM,t%ƈVиU| /J@)~$~ ~3gap]l KL; hKL9ARTTLM!1""Թƃa Pk OXl"YQ6:O^LD/R^00a(Ѧ4$R,f yy}=JOm[o; lm_~s+,ȆEK$R&G$#Z>&}-XrA0T `x- g,ϷfyUٔ49x9@$7UJ։h"bf<y`6Ol9{&B(NLT`dȔh.53BYN"DT ʹ$Z5oބ[tv_8T0tPEY[UlZˋX6͸V`V2D%`5TNz0¡NE*`oBn%,j-NWd-^~e?FpODXis6){A-w9ϣ>g~1=?8׷+R_o][jY|Յ6{!//98ayo9? iOD?-/+jyٖHzP>E@OQ :Iv5ލlmgk6LFu>>zo)X ^\y 8"H m2էe:ɣT/M QLR\ͻmXe9_/ܗt(r.GjYs:w6 xcϠqp![$cej. <anE5}@:o|::M^x"wݴ$yӾӟ{:>A4N%RNWSW7-K#tKQ7w:Uimc#ؙJTZ^xdW嬶 ^ hx_C)uA/ɲ{cg-=wںV*D o9[uyd-XXfmr]7Vb׳L~rX"Ig46ҵcK<`kۣO<XΌYZer|[]yLǨt%c ("0 `0:Kd:Y!x%yg1cw|lO\vځ<Fq to&<|vz-{c@bv>. M gd+*.8]^A<:{7]̥<Cj& Hy;ˈ2 YHE@C +;l@]hyǏ+s4tyM<o+/mX-׫/C=_-:|s6Y}AΎd0F?"r֧? 
zE:#LעϦ.FK`/q+ӜdL3Ol;v~܀%W:OFvHt!a\•؆yCm&1%RD$TT j]k)A0Ȱ\*k)0C6 Bw7,7@Dv%}RCG'i_?#v.hs(eyq[uujiɱf؅3)u~Ө%ק:=)JdtA `AH661)D5H`[J\SrƜ)  9S#%1H,U{H]Hb4ø9*u6gp&`9pS=~߿ft^)Q|hX?]s JEc5OA-XU^')Hڶ~ \Pl'%l2$SJ6{fjPtNz8w+j~3뷸2~gO%׹<94ٜHbvlJE?uxN^e)LS>{[j|}4{'gePǪTWF ӽ2fY趖zWJESR'oa&_ d(,lZ[fmaf8ƶו..+2R,cqI޷< :gobۜHTٕ,:4Q- 5ZMPB-|J_nux]=$%tJj6uB@Nr(-)oJ-vN<lvj}c= ؝[.؃Y)C,/]`-Rb6֫ .c%) m+ck^4 %0N:זLE*3 FUabH$&լ6n{ؓUjO"6Z""q׌7LфO^)#3$MH:8AT(J,N kUI#j-x)Iapvq⋁nno?Led;/s~E,˱e9KV?dU="E86lx6a ǑѪȑTejnj383wE/oisgZ%/I8WkI0^$8+yw +)1 g,RAȋ}ÝiǶ|(ẇr;>| {hl Ryj$1qOُYӨfg/l-bOMzT@xo< BA^۴>9S%;%p+|ϘA%%pK,Sw$3pԚtحtΦpSN9I>BIbtXAFh]6QЖ}ۙˮRD3H)*WUah)KrG9Eڵۻ38f).ٴGRË$6<̷HJkmo>~[pﺦxG]F,w_ʍi i &%̅C(d 푴7c2++/c}-P_u~˚WŰ}Sa)J6{i co㯯 8Wl^==-N7mN1ڟ~SNEEɢ`{yjݯWwJ5 RFH/GqwsfE.tqr&-]}ߖ7\^^/Bmsy/`Jb{ܟnd&*gʺn.+{߱*jw< l%RحOKY?Tb}ieOL;ZۺU?EWgk%ij٨=Xc@L+k7+ӡMt~V,O/C7-f"ܶEzYfدclX;=c3/ jH˥gI|U[֞cj])}~#o쯮gyN'˯WmIi/Y6o sW?yVny}>!3f8BM5u⽬VFVY|aڙ5߫;֍9̫Y|zxI_'WhU:x~2n+C?in__Dɮ Vi2/wYukմ.뽒Vcz_< !d-B£9P,8TL,>PŀO>m޷899kY[}*|]VPjY6bt LM]EӅRb.f\TߙNl%w~lJ\$p_E5'jLAK ѳpy>7, ׷h䚷~$*YխMUf )j &{'U& `<5s}< b2Tlӥ*'@:t?w<kkwpDa֏Iwy]8u⌟}T'7w>|n[=tf~tUH{8 mj>%'wz WD!B7ΧH/)JՖݻƽH˻] ȣFSNg;yu==yZgŃ6fvlB m&! A!ȵ+:e\Zt4i\D)A٥ %i!ӵG0SR& DYY0%E(5y`&"svGyώvه}Œ5u`]|K)\7"`E068RraТ55W/ydv$ou/Nx;7{oHc΀c^V۾c^Q;c~[kUCW-UEi+G]բCW7'UEkT骢wЕ^qܫ%웮:vbt ztUGhzEWJlNW#]m#zZ 7  ]1\b(tUw(Jj'DW p;hUEHWHWh@5 b0p *= vDiq+Է#Jj6~=IPfQxRMh[xrL̐j"Xm'Ľ9e,dn yZ'O..b/2EPR"3NF:c$N.b$NV|C]ǯK!dbMY6 Ej!Dkkuy''2,baU$1\=V>6(эfdC+`(`U0hu*J#]}7tYgd vI=/uBJݳ@W0ն3єU7o/2v1M曹M0% ]1cPG5(AvfvAb7f8^F`(^{/D2oa+CW ]U{(09 `*UEtUQ*7ҕ Õ BWm骢\Ypʁ(DW t B *UEI4wCWrC#~E읮:֨LWҞ箺5{,JtAj@tUk;p ]UNWtut%%=]}KvBk@*J9zWHW(0r@tŀ`}]uC+{]U+xF:RXg]U +\cBWsgdJ!ҕSVZ7 bY}HW ]ᆢ#A©UPKW9Ǽ}%mvC= ]HWۊUtL{/ ։\@:Е$EW p HUEr+M0 ǻbNUE{OW#] ]Ռv4977 V΍:AMV fy/2B}(Aaz^hPL=F3a:n3&i;CӤI MWN%g+LCW7=PItUbp*J5."]#eN *\Cd骢j+k`3J'V`3huXg7j \Ul VBW}+F b\b8bYwDk&5?N>LDJxܴrN^^Z(Ex_X~SVq^,UU/믖n?޿jt]%sA?|cO_0ͷ7neyVWΛYǓk.7xgXkۯ}V~sϤ_.w߼7oapˋZ;yǖr$Iۚx#5/o_}Tp3Q1% &[_ݲ8Li ߸E#0_.yI8쥲F(ɖ bI)"d!j*z<*ZS<#$ɟ)d}vߞ\~4靿ۿ5 99K \,NNe[58yBE^̋P:)$r\VϷ)' lF\0]>" m$B1,&Pň},;:h~(wvF1%erBZKưK_DTNw\V9 BZwsf V`XJRda8P*f$g `IKMz-ZZΡ]uT QLA60*B!a`OQ2((αMj!Oy]z Bb,Rj`mfPԎCUٺBdt} Z-L9%zӚ&C2BmLT ZW"M2v%PXT0-&?hkC,h*BJDWWFs5ERrLӬL u4BN8%i|u11m2rKK`]`-̒.x6+Oۜ7BjUȡ(->Cs"P=$(1lњI8^  h-͡]ǡmtۭ8]5 I&j<=E=J] E˳ǚژ[]Jq%YеYec]Q?s%xYPUS&Ii1 k9 2n`UhU%`}5 eo LI , RlG?5mB7BJJJ#L2b!]AАN6p.F}YBL dJS_%Ce:|=Ʋd2*zbJe5͐jPoB+"X2nP(SP֝P COBB("2m ؍ttg-JC(]eԭ9+ƒ s<:M;"q b004KPI!0ά :P%@ `-N :2]:Xi65lD]`#)#6*H( ;:I 85)mZUW.#{/NZ#.RVK􄔞4f^n!GAU3_Mr^KYvJDA >&dY\D d`niXEjE},B\FФBufp? rI+th7H+fD$' GcE( / U~6C^-re!]m/oj*ՂYeUw :=&"X 3 o b9PT8xi7olJo:@GVZLAG=$]I4T*#d`&SQ]CNZ,J茸pΠy'K =`Ye>ZbM=A!%DB>hwAjy:EjC$*KtPK|̡cLGuu $$5(ڽ,C),a3m>%TB;뒄 Xut kR([|Pm!ڽjF,F,Z{65 (DhQ:fƮn6 WHQ-fWbd@BUq@ơ"2ΪU%Cʰ"  De#@ A1*'J s4+it=liVԀʬ$޲[6RSҥ*x/GT^`!-Ak QH6 WZR0]UK mry7Xqν9tYXx2|>kuT$K n=CnNN44z63 Z';-,F ݚk)DzI(ΓDCkBm1&Mz~{4$CtM](:H_PFU~B:CWW4"坩y0B.8! 
R G^Yrp.H/5P07!kތk/n"AE6dn*[ /0yF{qWXoK9\o.i:ݿ|>ynғyыἶ|f 0 tۍtsM/8bqvz*"GES:ж :r>vy_olʊx}husuG| m붜Lsqr-Cڈ0(gJ*J1 ӣqm8>tŀN v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'_ dUxD: @+CX0-3A8=@@=_a%4Hd';>|k#8@.pF]N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; tN @(bHIDx@h@@k; vXJv@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; 619 8'F5'A, F@tݱ@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; tCOOnoCyi6_,Xy+m1ayy1vYs.~:rs[YY|6Y|~qWK7]~8z (߰љꋧ'z+KCWM F:DoPBPɶH>ēf3[?^H[Ųu?Ln{\@E1gϷWkŰ߰2aϬ>wM&8EL~GM`T pTa4tEphAڏ NWk#ܸ֮J3"j4 5kPZtutbbDG hNWD ]zDDzt`K}j!P+{LW:2Z q+Jc+y͡ڴtuS?qn0ѽ{u?￟]k[l Dl ziӹs"Ǟ&c[/] 5"t˔MYjqw5xQҘ.Fх~2Y)OƟ|@~8p2#8gpc˄VC˄8ǘ>mWtUdZA|R}s^N~{mk}:ru^W/r?g=ӿ!ʿF{<݄C߯>Eu5Wr7_y _p4=Vw}du]oo w~X 5%JL㳫ޜl]tgo/{c{p EtMoXi1||r5[1l8|Das55#fniWG=p;>xL`ڲ 6|ˢ`tOTMhjZz\颵"7VDo<s\deK俴hCm/Obxi3yR֑ez9C1MZ\[Q庤s'Z P+"E|Vk#˺~[,m*.s*׫I>['-tŰzGD%7?Wf3_IV5Ջ?a|xsMMkhNjkmH@_.uK~0wN I|S$á$;R#o%Z K4W5U/0CŤ_^L cK{Mklf l _Vy zuGsKւƦn{FZItIKn4*;H&(\J_K$,eaCsI#D#,b8)'P Y^$%tJ9ʸ'CpЈjlaa|(<;p$ \SCLPQɨ9<@iqb=X1tFz0p4b \, Z/ٷ-bu7YڗSWQ.q/+$M$I" LgixNJh26O7g8\YzNi %S{JD4ZT$DM +N8k'Ӊp~^%qRAKk/^2FI S>"%k쨴<$dNRߙ;֏Iő}%t2ZGVkˮ oAdGx %iypC>N7RƷW8 TӾKg=g*5q9$z7}^Ƽ( nT?佌?1?$YvmM>+;OAtSƢ7>ˢ G#nB31gɟEo5 1ٛf7ĦY̱A۸,_ CR nQaw] _?~:ڭh7x>[Z.]? ɬ׫dTB9(.rSÛK^'5h^}Y|\P傆`M0G` &Eh{l3Bc(KJ.9w eЎu")X')*mDZZW7 M7*oC s&M!O'N&z4z!u0tMN.-7?X4ȥ_,S[FK]{You~1Wib3SMpP8_9O(ǖm&ekv)s)ڂ%5L]ݬ~UrE- Ҟ#iVwPBb/[V?cŝ^F-b5%4ԴA7)ĔLBc-)jLVCAχm7{-_&z(+&i~yBtmf7^&Ȧj~\i#}c91okU1.- m,`+_i˝m+t4:F;ǸDh#<YH:0v1L3#S3XAyV.bM/#_Awl`4R ОL'3a$Ot3ȋ*gVQAp3z;(\u3؎$ A\ZM=L80fQ] !{&Q?7V*ÌkCm4!;O]-gP6?IFε߫8Yʡ͋|;OlIvL3u'E7}%w%) dkMr+a2xqba@ geNreF15~8K0?qvn՛s>[ >Yf L_C,^@Np)OA ]bV ßϯ^&x[;=0D.0ܠ.9; ʥw1nC[ձha^, e`[=o? o0rDFI!AY rH9[5vj@_~V,j3~}٫f7.6>h38m3W2Q &#(}$ܠ%X̻6j1lȰ\j Xe6PWݡ*tmGjVX΃fI 0@Q>T3NkUŔ?kT@x'z&}u jU{,F9gU#Nn߈$C1DЀ0>Dyym 3L=su1,[el_gf tgbf*^P? ,¸w Lve]Yi)MX{&XSNyi3`6MhW]A ˆՂ[?ɛѢiWoD]LtYYlvyyyLv T3=\_+vOJ 7aīR+vН[e6#Z0P-0;S ؼJ?t7sZbci56yz5lѷl}-`Zsܝ٨D5dTs >Ng\cj:{G/.\BZ)-HI[V.mʫ-l/a,lؤn6 k?/[ #%f`fa5M'uY4:6 *z>%Aw~lZ[ǼFx86aH''HUu(yRҸ}<(/ځNڑ:1%wJSUO+kLk׉&i-v&ĂTBX:τB)bc$iK,c"0 I;NS9i#BG[1GYd,~S]ϗf]oTK{"*؞ I܆<}0,l .d+U-^X6ypZo:rbLbHAic Kt"$1au:hW!FX@:#g[8rIXIX F+GD+-1t$N p½lIX",9:6?gY ʞw*&1,&i%vTh1PTj` +d1UJGùOc]EO?%1O4 UJUIO'daLYr&@f,AyИ჈ q"rQ뜅*s_wnfwnGZ9eǒ HӰ ƄE3V5N0-qFpTv :9RSayܦӌFGo簠xbt,[ `kcLJ*anzi"> `'xT{;[fmj{A%24F }+kʔMr&Ya[*DTg[b[+{Щe9e]`Mn dd%7T( q4(]*<%'tN*6T =-#D9D `P@!*3JR1ȩӄɁv1=jGνrn팁f,e|@;F@z.ǫh6@TsZ23,cДZK@pR\DipRVlL*21F)!h~0l^Vx<";;* }cnuE׾;@UNOnvs<'BG2v WawHq9 d Mΐ/IH-:92{W5%R$L*9tRVejQL NpsCW95d:,??5k"rbt5Tb]'נurވ WUbAA.X5dvݩQg&q"ajɕ^<9w;*YsK?yk6* f'VF%CT9/=}%~}o3[{/>vǕ?[^8T$NY[".`@L7H J̽V931\ޱ3ld􄵔4;OTQZBg]T P6"M0֦40pd9wG`p084OAcE..2>ݤ~GݵyNlvty#vJi %[|蒊ZZ AX_dc-5^~vLًEII¦Mԡ`nm5s4RP͹;by٢cڃYǡQ8E ޺I`aULL C/HJEvt*LyU [qje! 
2T5F5!{GQm0 d!Co9wÞ7uiC#b8"x;h@D6bBs$e# S}+jb% )V1P=*VgGf+4L9 MDes8"֜{hI[;ט:z).NqR #XB "A@M׬|BH%PS\|85x<@T};g2SDէQ Oݝ7e?>NW!T\6gSAu%hcwLyOA;Ϟ(KJm.<3H61'Ut+^ zα*ڥ@8Ԙ)0fӪ$m7SJ}gotˍ+KtH\u$MMNsʔAgH|4G*D- )4˻ie(oٺ|}2l_0j|Z^#:En*mu9 q޽;mR J yU'\,fq9ylgC5yhG%V!%2Y[w1Y6L(0{gZCX=bqb(dؠ hwCq# H%R e*.dMG.NאwF&3-X \^j@l?W,obѬMr<9b5wr/y9M(M~\=9|j/~u˫ɏj7u+eh]"h]"?_}/>.srr|FmNY5A?ܸX a/7jGk7\jvo?-^|ryɋ~(DŰ>ݟd޵;+ϏW' ~?x5}g,y}}_:vttBK>^7{Y~¥mzv.K~m12J~f=z/;ZZ`y5 wϭߝ oVWYfgg)߾wع_V{~C^mlp?s6m1ٜכ^.OK5e񵇎{-6wBG@}?q\ћ6^_.0I]{ 6_YՏ2=5;V6^=w֭w+ya>z6Y4z+|嬻BI^斿^^?ȧ L~g[}oџ>mw5X-}sH#ik'rv!/rz}koi_^C`Vwg}|O?{թd5(":]$P hAa ؊*ŷ''NN(qS#ߜ/+fj+ƆɁ]U.V]3m(}RQ68O >l>&L'wPՏCWFHZgu䒵2лн)7&ʶmXIt ]r)tU2Mٞ^zR>SP-@,){_L-,>]rB6aIi+Z3М/Lم!r?h{َw7EjڶU1OPﱠ^P.)j@q-T W:Ul 8ldD(Cjg T j cJɰV ą`͹EÈÑom屌!Ӽ?`;O ~ jK 5E6lڌMM!E[`.:E9$'8 ܠ5{μ;Vvu̾ 5>.z~e⮏]:c箂J erFi ;1v]% :e{c38ޣELMjKѻsQVhb"09Za[c ZŹm#˔@5۠9lhzi5ZCT,RyCƗ*-ScjWx2OS?TmP 19psS]sHYL/J"b=C.u{rBVԴaI eF#, J +iA#6uud* g5~6:*ybrx,ufdE;Abj:YEj4`]tfir!]ݘ}=#ۿZ+HP&JtPq \P]lRcT6OPk0/fOaIq\Se̴$:1Id2kd%?!"p#P1HU1lfDlHFhc"k]rR&3૥\8 ~%% ۆ2m=X:z9Due}78d%Pp_٪P l.XSѐ"#&;{lFE~ 1;IeXsSJDKEJHM'PpG}8rwRRg:0u18PSr9g1vrgk{)A@[IݓAt#M0vo/grxw 0g]:4캖AGOa0֎{B=fkX&4lUQBh0 t5  dx}5Y9i$ٻ ( %YeR1%ᘖPJL-*%ߒ9,8~{CԔ6&J>k-%*ǜ0gʣK*JXav5{5˿j!=ey[ue_w|v~*NӻXmvWTXʳVet܀E/><_$; t6+b%Ce˅зUfْ_?{Ƒ au}X$I N/E_)&)Eeh7idΘJ4StUշ߶n2l„f  vQ?:-~u 8De+KW;~QP _~CuBg;'u]g^] E{ǩd? \kFU?2I1 y<7cyXl\y}s78x*[֛xypHS%#E׌']49^h n0a~Y 6.O9b{R|ƀ4esiT/>x~@{f)J"(z} ɛ‡/:Br~}M&{l|!zUGw*d{Nk\UU֎mm@3}ĭ]%;[=gΔiT)ѡ93GsfvhΌhΌRh n"m:yk%Kk Lxrܕ:uFM^_`xmBv0!3tRh9i;]etЕ:L|v: tu\yf: 2硫P.R'ЕMO䯭;trhE*T=]]"]QaWpV+ @g*%+tUFYtu9t$Kt3 WUF+[OWK+.)z(F.}0&փގAqU 8{0/浭o(.!-saM oN韾ŀ_K& alfe~[η-i7F [ϟ" O{lΠDuS~egc2!,I7׃j ~eW|Ck1 ?lh(C4V׺^c_EcjYF15tshQYۻAW' }D?.^6g1 VHQ*D gGb>o2fN% i-׺%|Pk.•fR͙ٹz)Pl'v&6hy3}lvCt ]et2ZNW+]2`_c*} _NWemWOWCW+]`.3 U+thn͌%ҕ!.̀ ]e3+Cƭ]e}0 ѕ9&/; 0gN+<4 $ls](҂51'Lt2\.BWh=]e tutE)t2`;CW]VӶUFixOWHWrB fJv2sNBKϕ: e ^$]q+,ξ[ipIg*=駡K+IYmxϓ2G:)+V~IΓ:E}h\*QX]SyP[s1 X̶zcA1J\ʜhܴd438qU]Q~.Eݻw@ +E(?N`4!.CNxﹲ&mYIHZw=յ?ʋe6q1hvukOY5e럿7mvr$lX?b^աI|<clGş14g *?WKlP=@~1܏R+b~q- 4UHK缵c`H$2xT[ R:.R58[,?LByD!> '~F`[K8`9+ 2L b s:yz(SdR}T559xh:Fk^/@7Q+Uʝv1tv ˇ{IU&u]~۵=PPkQVQ(f5;!>Z~V(,VC;p$%gED3hmp|Y+,gBS1Jq`\AqYmcLD"FPР I.Ho)n50#B'CTJhGˉ)jEp"Y" /i(\@+ Bi'j[V( FP:4Up\:DMJ&CcI3 T #vp/BhFcRw@`ikpin] ѨЇ1Gò,CcߴUdub ܔ+qRz=,eOo&NN}oٶG:p">cu`c;Y ׃e(g[daPVjjc q(Wk@`Y&˔D\~At$I/ Aw9I;_yXO^> lМUj-Sq1xjXtᧆ)36vQQ:^s ҠFO.wyHtAF5R, ġB$6Vc8OXD LA +Tx qăqi)1mR$vㄹscHh!Ka'zd|Ugwخ|䘢+#q$zHżЄLc.11!q0k,SV*(q1@Q˗~\b'_=dZ@;h|6ʣ82Cx~;avܠHhz?9`ic(d4KmSz^}P<]]~Ό.7 l„f b.|JB؃?J=8J15W峟pksAm\t|Ev, x%`+%beƹ F1 +l뼻(x직+wz6n泔eZ:# _bĪ:'ŀc@| DIQ|"ؖqWv&?-uz|2IwӢ7u{G m>-f49/:K)GlOFP.]-Ά@($Zjb}/$o rZ0L[ݟB>.ųVSH< xWX̭mk'VUWEMo=-ʠ2'ZWʎN dK73[;2qKvȉzϴ'&ŵMe(qXzgHA@P 9FbXrƬ>J)7~9/K!Dk \>bTIC92)x""ռIH&YjZXDx㉎F&("8J#(Apʭ18rܢAڥ/+ǯV~{r8]Nzܾn>ڄ*Ln91z&rqvugȎ#R%%?S,FH!,Ck$'Jwd_Ǒ-wdUY"GVÑĺB"&E? 3 >X?8r1OTEuIJ5RN PLr~11]6 Hj @4Z"T`kk6 ;9rFYɧ9K=}֬bpue*n;V@:~12~@y yMCԵQc2D=.nc3Ůth+ XlܕryCVNCItn$+hS6Nu ”uG*YXP⤠Tu:n+ g &֢w[. \R&'K)Pd"R)Jq!09$.¬\QʛAB9Vs}VH.:2y`l(U4rC >q+#zM9.gmZZ_wL-}-=8E/;œVۃZ'5,f3ͻ3dA5mGBh!(ʈ}H6gg 5F F$N9J{NQ& x\iǢ$ݨ?{בJOE!; ;o3d]l%%{eXխTZEN됬UGR`VWV24PbvVLT(BblgFlTU. 5 b;(>5U%$M%V;rxV ƚs'iI=wOMȥ RSw T Zm,Gl6ٱB]V7_ '=[׮lY'.[PXget*YR<9.c^O6m/% #+??9,O+o]E/BN;"me-Y19D%xCurI_AX3KX'O$@OWA]t6.  BR&VIGP9t$c! 
^xTb;)F:iȾY7 ژnf!8dMD˜[%$LqO'SމZӮ9__'p)%xP0vp)5!,S>J?4(wTpz ]$9+[edo5f XT5XO}qWu =l߷,Yy1-JLbnYCqL+ aJ֤ŗۙd] p,߲?ji=]Ote::89kU}e^V\0h)$RUPv8g\MLv.otHޟ?G2Woc6KRAb?q:**%)}YﵱSH%=eAv:TWs>4kT s&l]^MU/OgwL~a.yg+q>~41p?/y"^~+/0C}-uph (?v`qZq`A?B 9~)҂u&FU0,`l@ѱ31V!RLqA;̎ "QNN>,c/6cdn)t~n˳Q?(Ĭi!j>YJcر̮X9W](1Wk L j-%i*r"HkVHhOUMwT7qnkq ֶnJ Z_{5uLb,&l96>Cx?:đ6ʃkIEf4vG#vk6}<8{7;?5 Sa9eq^+ZM1Ҙ=nl2*p2?j3FFI@f$)n~̯e3)ֻ8OYAߎ{\ޟ5)~pvAɐ35EFr$!keoPTے#yd83NAEBGs4|o#F?6qv̿*SKs~ol~>z~87"7y׋j^VAggٿ} ᨉe*ٞ2F).KD.כ.j֔mwd f-R]կ_yYZ[Բј u6Im&֏+L-d S:.N^/l62nfdmJSOPoj9BUa0puD3^0-<csY,.Ckkpie{^pt[t@ˁ,R::h$+ uz;p3w3b1_ [P>myUh|֝&Di';!˙[- .mqTѨhJLQUL'q3%TUX%$v4lL,_^NcG߀oj̜|ɾ&_C3r LFjc8 Uڈ#$'n&5 LWtzPdއ#.{_kJJԩ*Mkڟjz6e0u=V1AgW4 ,J% NݡDQx{)ͮM=˂g?/w+Rj X[Nn?ŝMGy'`y2]Ѕ3]=ұ*ʸ\Ѱ;Ƞ8*gDAΦA{g{AC\6)`v \)y:@X0#_dXnj$HYx5~բ[ë?U@^y槟#W.Obl,>ȽZGߟ_֪v']04˘0)l\|.~F_YW-0BVy}.YV uzoX/&y=-߭`\[#,CoXeh?כOƟa\1gY^ Vjuuv2^9;?=/yُ?e8l=?3=gU^ƻuphZT/Ζ+}-r:@Q6usA Bv^:mjX?­}GYUx0h)I`SKIGc(e@3j9ԗLNփ1j*EuQ&YL Z_B.ck9Uvg͎7}q*N*XeyBY\c<$[\7AS"' y #T XR7ZP*u gɧ Yg&m d|Qe%rF3DSb)H4fDKAv)GAT~fslm*SHj6> Y4s=$TA4Ylw}_۴$ƺ+OMAwoo|%fNOszW_N]j L۳aF^0C#'%Zm̨49>vchM%,N$Pdf*$A;b1FM jf7rI >|%8RmN. YGAM7qvO|Xw ڣJ.1ǖΖ)>r"Ϙxqv a*{?OqBGO-|de(w%^d5`2D*V5AxOs8D8Ak\g2*2TXQAT)`S%*mWhTK bS|u.2IX$-Dj3Ǩ ><%͆Zޞ{K?K'%1ܕ1 c駓MdIymg{<fFN#|پ\z囓;E!P'rg+k1қ{2r*ks }=cXs <"Zm |ٴlQ,9`Ik喝Ξ87{~X/w/ /|pV5yC(Gfۛuiy}/ txx`~=v %u)EW*8p9P$ϚBu?% U@=[LC (e5mSp\s)jRU87{l>>B}ݴ^:{8y n=l`JY bqN*12х=mdV}v! 2ԮոҚfR0E`U(>mr>$A.uĹA9};{D<iMAxUTT\3PU$vY;*i,Q};"hBt1<5D-a3gJ{ɑ_a< 0>f](YrIv,l%t.Kb* `UIs# #i _; E]#/ίjisgZ%/Ž7xq:sY@zaG',X#uCF˜HՌ>ܙvlˇb|@ahk5s䣫E</8q =e?:m ԟLpOuY1O R*bv>.dd\࡮,drThaÐ ]⹊0t>ե4=maO((P9ٶ;o7\+.}241OEaB۪xi+Lu4ݼ9Nl~|LN8r۷쩅_9sw-Bd٬2دT:Tq0U\=K1ӕ;HueV*:|*IQRʕ U5Ԯ-1fICͫWSUrv?' 8=-S)OvQń63e1N:52zV^ǵsҍkN^G&}^/bV fۓ'zgH'Ea Yt2uS2> Tpp^{wSW_ÂmQa}Z< [_[lOkO4ʉ-"`x;,Hp("rRۡ[DRsț{߭\ 6LD7LJ%/Z _i΄8Ćá=ԝ6c(cqd䚙$yP6,u:d$3)۝fr^!FeRfGZG:Ec@ ^09+î3rgӞֹa/4Au=mSҧ_&M $vZKgŧ?7^JYkjc efPKc}: yvQ/ƻ]}'TL\y&<[ 3bo1{Z{rz+|"gJC+啯t3V3ơ|E9ODym)'Z{A>|&U20 :*+"dӡ.˔}QOOެͰ'N3 Z4xk)՞i/A{3)V$eZڡ>©oFƗs:ִ}UTkAa@V+LSnru s<BҏXۚt&]qo?s:ךI+,k+Pv e2>;mL@A/ߋϿO05\ NUxB;6e'9$wѪ(A{EFe\vkH۝:xJ5f>E+Zk6LdyFjQO.{ lIԭNZA)'hSAq[̼98}@qy-yxB3/nrAh̙MHG=𷩏)UG^ZgV)oGaVw9"|wʧy?vu_//z?:/o+'0*֝ZK.` F'u@VL]5+ 4~}{;jq^{"[I~| B}ieײo\:jm&rK>ҡ8K܇:%ղS{ǐ߃V+ˡM ܵPǪݝNbgjfڇO]l纯WW/s[kfk} c4R16C\Jm:hrr> jk\nckͱk5.&_0o`u~MrF'1._|Զ'm_ Ҍn>Mg+m6:dƬǔg|nPӟ;x\(]I+q3kWK&JFXW$YOf>|9$+47xy_'$fux[t~ScHۈ4ؕ,6p,AtɵL/p .'meBOӺ&`5Vz0{xޚZ㽺f 5pͼu!d A*M>I>={/pr:iYj>..uw6dn,r0u*UD#꽐ŘO<]\fѮtb3<Ŏ4^|V;}J#,{[\{ܪ!i/)HV1hA;eaRvZ#P֚v̎00 BbJS ]Z}+B@Wm&z@GR{N5/]u^Z rBWQ9]mF\9SkU ]\+J+Dk;]J] ] LAtE(] ]I@t +)ƻ"NWRfvE,)f:wBbemC+!ˋq=5b$o'$@ G_GIsJ[uyTYy=RפM|x %{б(“:;>N`eA@v q`l!6?N+ Xd-6VDןjp S dUT\hAd}8ʁ؃8_ ybOQ| f1+nR) 6ߕS0}?y[G+[Lv4QW^P# qZ[q4^M#Jng8nj^uqP7fWRB3dU{P*5f!)^NZ"hgMI A9tEpu1tE(+Е5q(wE(axNwtEfJ$ r^LO+ z5t7=?bt읮:js;…==wBV]tVr)eAt,*a}+BB]] ѕ,dw"F tut%JDW]ue)tEh{at(C+ڃ{08]%75 ڷv* R9(pIO:/w:н-LϫUT`vU Dm lz܍ McΎAYi%0gvTt /;SpՋnl~AhE Be ~++L1k8V>!,O+D,J;'DIeKr)]R0RO+D)@WHWIJW Z-NW҈ت'dXp((ƻ"kW'dĆGD``sZ_i\',J t9U'\b;]Z}+B7j+1ur++.}+B)@WHWp1ɧRCWWRB+Ba+ŝ%ؙr+%LB w"jX:HB+( H~XEl5fXS`9NVTJꢦY=i)=:Iv5c`l+unipaeCiƀz B0pWLlFpE1+߄VfR!6;L PRCWWR*w"Aҕ r!\]$]YeQtE*+&D> P 5ҕe=0bbZNWY䆢GL4U7z`7fkW:Xϼ+ف@WۊseeAtUוBWV  Ɗ-vwC9AҕTpV]`+ "z+8Q]!`  .RКޯ]ʕ47SOJn:&VyN 'E} _)ѳ(▮Q#P3lf (i}b[p5xֱ{  K(>2B 0ҕќYS]!`i?BYr& :@Zѕ J$BWVpw"rC+zJ+yOp]Z+Bɇ`ЕP'l@W뽧Kv. }ꆲoZTR]m+zn\It%W ]ZNWҬX]] d6tnpm1A*Pl/D{̗=j{3_P$\Za3.XWPԲGcT.HQ><07g-eSqp<4s|DW@ ;ʑK򶯓gSӗ7p^}QGB_}\䅲/VF20Q,a*e $v8S nT:ĨL,B02h 9V8TXV9˙JIɧ*,.Ru ( ǕJ\`73ϵw;^Jazvv9Aj. 
b-6UWQdHѓdĕOe&Q$i]z BԀƌbBhڌ>f9) ,YC=&M=#QSTR\e *t`בFڀlSL*!QDD` ^EK7# Vǜeg )zX˸3N"'@xd8Ԗ}g؇tٻH++e 3dsmeL2Y$ j%4N5$* 4Ps$uYY^d рOB.}z9oyUUT/yr}" Sf0j$7.(]cW@^'S\k4л9:wrjkeߠg1X6c@FUkI&$ZPR@jK+}RƗ ZRҧjkѩr}wr26z=򰡥VnYB H܎U.U{V+= JAuTkXZGuiENeBф i1OL&X jS1أԡti~[~\qD2.)VJeWz#QA"J}W:a UlAEbr2uA Av%Kkzj3|W4r e C؎[@C&X !a@Y03iNwQU~yRLA5 .0t֎CC\Ә\ `.ԚIab@9s>CA[*MC{٬A]6YH)(J892jQp5 ڳNw,Eye(_uu+0X:+_PCP(uh 13wT0Wz,ILw_W\j]JDāfMU0 DbtۃB6jp*{L|\gF(4yS/KTcǘAQեTei58!a%`#_3}pwy[s]3G/ bc|`mFL`-$ >:gAuP<n9@G6ޕ&W$];-6v 2f`yG 8X 0_nU{{[ɐRa2QӚC5 !XqcpK1Zxü &(!t_15lm;BTm:9puaU% 9աQ50WHugxӶMZ|YDa4`]d/7+Xwuv)Ru+`+tme,c;KrIt ȋU}L! 5y.x WH% 34k ) 1 j3`1=ѫV *5hv5+ڱaZD 0PTC+P:m1Ѫ2 aہI(yh0I7e#[/,.LUIbeUUQZ|M!jPk {ሜ'z33`L5Yqj(YW j-*E4?V=i p j3k*uޚདྷ"GBZ&퐄M٤$@}AFJ`F\/;O׷]'Minozv&,P`0u F^,4zV0ms> >;ZeVSGj5Ǥͨ0mQHi]،FXtT],0XڨbjsGh <4 ڊ[;jՃ*H]6 |t>3Agg &Sp VQTX]ے>޳juhdx55"kX*~dMpڠ@q "UXkʛ; ZvZP>hԋa:hJ7a#.'UAi#8 N’ # @?t|@Ap*(Fr\U;l0c-ձP@̊#)S,](ZX-* v:X;ע`|J,0H5zE?݌OwqUUr2{ {>هNX!}q3~U /~}ѩ f 4׿||O7A.wkvMTbOctћ?g\˯&\(/D o?G5~~Oono_Bks#__w+;yr޸?piTEE3%Γ\$TN_2$0 d5$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@:$Aj$SZ?M7K@28Ic $$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@r`7S' VӼњ'ZIc>IH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ tI v3%P⼝' i$L$Pj#IsLc$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I II|@Z^p>rhw=ܻ77x#Њ%80Qp)x\Lp)|p)l .Cp FO9WBoWTdjN'hgOޘ;LG|H]//h]7YF[qw?T&Avw?\{(OtQ*߁+U`3|J{m`~;|ݫ3$U4ȩҵ0ȪOܾh_~ +vn<,o؇4]N~r{Ѫq.?՟=>uo<=߬_BCob?4v.^*??0rp=&¥6*D!鷿!`&f.q4ތz77c1mݛ~wś7贲y; ]1D[+g']'+4tu;#m[+ J ]!]lp_6fbVBW63-tu>t2&+1kUQ:GJɻ85(&+G+jQ+z?NOWK'epdt Nzc{WHK(ԬW ~bitҕUtњi4 ]Y[+FK3A&7d^BW6ڭLΒP{~w{?0?~kgHŹQw݂9_hXr2Z%F?zߵ{WU*i? ?™z|sqdZMܩxiv&{4v6C8WL'?_3`2e Qg2&h"b䦡+,thOFY=B|*hiO:]1Jkΐ̤8n T:]1JㄮΐuEc-bz}Fk7(]׮ڗN>_ԉ5'2Pڍ ]}k] ubn ]](< .YцN :<6}`o4t!,tƭn7+-}`j\o,tn~ (BWHW66E«^6E 6k6/h?F˻iy7͢5nMa>)"9ϛ–ԓtc>(SRKCz ;G^n"eL[IõMx3}6٨7c^-|'kL[I zoFkQ\:G 䣟I] *sPڨYѺSㄮބ"F8Dts z; ]1SnJ$tutuMDWLv+{g.C[2ɽ]}Pb8˅N/^7KbH]-CIdUҡ>8]prnbdf+FQn-'t&te~ GWLNCW ,th:]1ʨΐlDteSzup[+FII y퟽8a/h /}hXRrW,)<֗]xA&Ĥ6Gow/hwϤ2\ʘiTõ0Z2Tq"`af+F{I-C酮Α|4a~W4EW MshzbBWCW!zcDtŀ\7xjx"u e՗NQi;]10]1hg+ՊNWR3+t4]1+*BWTQe(ΐ?HMDW /^7(? ]1S]]ҋ:KFyw,j׍ٻ6$U~06ػ .[0iHCʖݯzfH%ER#IkV{)EjknS 65S do8hj9KEIN"z  ~A5nۿIq{b#XoU\|3V"|Y7{H1rFJ`L;uru6oV r Tu+RԕĜAW@.egN'GWJE_R9+bE]DSWW@%3Օ&Xh}F 'lbW@V'4!wiPuz}ca?{0r3Ǯgr؎c}ҝ:v1X3RW@0lU"WsQWZ%O]]%* TWhJD"gjsm;SKdwQW ]%>uUg &j>uuT$;uHyFꊥ٨D.=uURtE+AtI=75n'uJ,kg%frр Mbj܌rH [ﲒgQbSʂKqnu qf޿zxn >(,%؇SR%,k^`{,}\_лTMB~Mz)yތB3~ HQau ;]oUQeo!WܱtK? 
ۼzy(UkJ?/Uo1.tb޻Fƺ\Q-cTɽ;viS+~)߼-* u^yanR\+(ĠMv2.&&VQR1qV{tHbaJW)yӊՋ4ws1hkx,Y~3-fxbWJ77U7uMǣbyÛkuީ žEa dGtYI5_6YxX+ٺCwv] ތAfT;3%$%ثSbn3@+@!ղbeϡ|}L@KFP_,0SEQ^&̆ AQ ĺ}a޻=,z ^ya|J<`G@\M9AO\hxϮ&#:{blJa^iGjd^ 1[,} *v^l1Ix.;gP<-ͮgKsq<囉d:i 7 (SIX:­V,*%N pVQ7IlB Oc*5(K)L1Kȳ]k< 6 <WPL ]&F<<ߎp3?=HdO:caZ6UT4-'!"Y\#YESBA$> s`Ã̍9@;Vߙe$Q 5f]KB@6+橑 c<Ә"D{ RԔϰsq 8xЁ2H"!L;Z沄HHLP9I`k'a#ip-iLY8]W,0L(b΀;@CFiG]`PLE~ r=sd>דcq] Vfi ~o$j/0;)t:6/*X*~_W'UB@zگ?9L"UZp{n7Z)#VmCJP]MwU+etmϓ٫w.{<_٫_a짓!h,~fK#XӚKq'|%uYf- Z?'_^ݫ;ʔ s}8L6+X]~C-rzK1uVn\,|^ͻS=)Vz){-iZ8m۴.wñ__m7%gh2Z>";.s,Ŭ ]DlMN=av ǔ yb-}.k#kOc9W7]d݅/f1=ѼĨeObYdՙ7ZYU0.V-MI`2?Rim+{i](,gh5i7 0`BˆzEyrcQo%YKq(6+Ǚq!.F%x]\~ɗqu9,%,yWZ0.wnj|"Q]Sk_k'kymEMņJ;}MpG{ (rl)x X C $< a (Ɓlw ǥ vj(9Us5h3FxEwYjct7UI>r} xFf T9tM$I0;s >]~b}[je}*۸I[<3A{GLkJ\O$g[*];\~gz7PF(eƔ;NH[`U<)HT>^lNGr9>gsLTrޝOGIvh˪ [ʡ$Yτ4c\h2I)vQxOCILJ呴JpZg *V<{Q!e /ՁnQNai>CQ{0rHRIpxaR[ERcib }yG]Fݞh~1L¼qƝQ0VD,{j4$34PimvFq'38/V1`\sXf 1vޡXz ׀t>h;3vnp=nƒ&E<|>%"v[ dMHke̼\ ]3׮/*<}jճ%%?VÑ%zlr3j @6:k [7'6ݤʼ5&59O¿[;-|066;\I [mߣlܟ.i#Knz&CZۛȝaΦfV*Quq?ǶDC3(]?һT*QC{4 d%[vޫy]ʫL=jdG}˷c1> zxOwJ;N+\vYtVQsm3i=FV!WixaHsEڔRH[|zro81Mu !7 Ҏg1bf)EQj1w!' .;>.*+ 8di/`e웟~m"%M@1QیyC3Ram%~* OEߓY~B@e 6\ 3cNCTfKG(WHdH%vOG XACQG"`"RSF@ 2#(0P[nȶFΞ3k\6Mޞa:`6d6ty|Ww_j7O;nwZ+7TE0{':Pe ٻ޶$W?.rT@b`mf/YRDg0}IYEZ%$aOUWU_e"1쨴V k 3%IS.Q*yJbj-s^oohq.~!깲mQ/,c[2ŮOF Ǯr }A?d4;4EBZ@uWxЈQf' }P"F˵fC*8˭K0`-.Yr}K%ٞ}r{]RvZP[G1Z9o5ȟDK{s3l>tՠK/U$ߐwsejϦK[cHbіfFeSS$96(hN+*:Yn?]x< uP9zU9v Ȟu1F+U$8mAźs{R|`VaRye*hB>d*P"9.VQ8;1)[(\kJ0d,;YwT⒦& 'iIr;fQ3%ABMPl}n{-6 c!Ґ& j;n``J6`+ά 86T,XY+T@SW%AV'qUn:BlN;"meѭdvڦ*K5uDu4Y[ib|Tg:ߺ|[QJk.J4% J'XG7 76AxsI !/)$Z*:!aJ6yd'&zދZ%+OS>z(A`H )jiQ(G,F;r,4A 0E@# j+j(j}.ӁE:}߲ l)$u(bLk4NOŔI.,N3^mYvӉey{C=]Oti{е~(ɢ:y)VHLh)$RU~P8գ_M̿_XxWuoֿ:r h}6\%r"LT9aU)RRLd)H%ȾzYA `2*i1ax畭5s3E3Ov&¨9~ OoF tu} Ϳ_: r b-uY4(?v`#pZE`K剠S!: /`^A0l@Ѿ3V.a)!fi=J%Ǖ_f -@g]AC=Ӈ䱦tb>DU` ; z%܁} \c_ NQc]ZKhI %6+-D`rĨjRs_^>uSgk_S ;o+"Kɴ:y&b/%&Mۨ^?xLjC~x'Y8W Xן,#OI{DAx1"2^h`[bdNY}ʌKaq!$y?`՛eU\%ow;[72"~ Q]gccr|>-߭ʵs 7p Vx[o0ϖ7ٻd,8;=gKW}&>\Ѳ_kRx%Kg}kQW|n*uGc1TKo\m/Zh@1)/?"JJs4߷{۱{hD+kx3riDq۞3c][ ̏)4|QqAkl&%SmIT=Ze=2k8ߡ*]w}CfZ<x*::[ڢ:`cȁQH!r)ǀR-%D%P%q<ǧyr=]y^ q F{0::%)_F15`ǚbd8y^mD޿xF>'ꃏd+$YRFXj+2b!8Y}7Jch $ԭ@Qop] R' x5z:yξxnYNuc%ci.VN\Uc1&PF&QN2wpweS#s2j,YʛZ(D4d7kw=-ޑxv>sc,䊚-_tʩ>2r^ķz|AnRevN݅qEγm,oK~:yo9C?풬~Sj؆vKw}MQ-5GstѶ\'ώmO?4'ϳFZX+lzאǚWDbQ>,mEGMm n{ڿH8JqYrYtk첵mřsYVvWo(YZ)fjx|4.oM,SXn3)~\>>B 0u-us9u~IH暙)?N#^j@u WQ}'؉3lȿ:aV[x0<kɫ7XdI?L{@U 5%ZveZwaGyy}ӵl*l>HqY7Lj}z}dLxWwl|&a>ޢyCݶ87!zL;!w Yz)^o_xvƝڎFJO^Ut8%t:!Z-,uNXBkF[\iG|2pt (p<`BMX9rqLFj}8 :Vi#~kI n#MNka}}/`JG\S'\ cוQ%)g.:XQcۢCb>?vi` b'@,aIa>=;s>b= o7ٕgYnE{]-ؕo -']SQA.G X^:nvn*L=]=10[p}-TeFH mK% \(z|'y1;0g3feB a7{sL(ئ0x- 26a|xpkQoaeDL䋊Igg(ϥ$O!PHqѱPW;)b0车R1*[2q`9L[bBǂuQeU8;Fŵ;d|1z)ʚs.@sU'@s5hD&%( ;r:EO JNKRJ\sc@)8TE r!$L>7^M4>g a=k *H,>5Ð5h4hHɰ?*XX$m m4da]і VBV0\ gq!Ԍ!EDv']ABaL=9BJdz'?Wb1`[δ14s^y)4 ^)AP \ oR >VcM59 D\euZJC ^CLSPZ M jfZ'CT(Mx0*qHls$H?GdR jIy=Jsljm#'G0ஏnlӇDo\'tD(hH1$ ^&Jڧ`LBF=j)_# M {%`8AAg2*2Tm4FeeFvEQkm[("r[b*TɨYB%w]Q7qnF⡇K8no 2cC_>LRGz 8|"yQȏl83[op&/ ْ&l P'+:$K*ƄEpP`" ' ! \wE*a5A:圈9jMm%վj[-E̦wfD7qv̌w)˷A;A* ƨ%̵pK0:+*h6%ؔ-@Zݮ (j)N,T+QWI:`1o,$Rc A/qn6t3X*h_:/'G'\ɊN(:iV yFN#=|Ȏ\z囓;EC2B1HFV JoҢH. 
RUfﰈ}= \U@E jSRH}MKvFڲ;{nanfξMir Ng o0Z^"oם0Ã{Rrّ'dMFqAI) ޙ6m$i e`DƬQ|9:pEMPJ+)fS7ؖf-W:fBLx bJL*ìÿVV*2wgT:Jb# "C*p;p4#bU G`vi_$h<ܛ%%TM!c["=ѴDlKHшI0&IF2e"\&J$)KbaӘ(,XEt[^ZC QZ@nGm_i i`} 䑉<3I4GɾMf`8]lѤνyɖ\{&IŖQJuD($|AJbHbH(3 o>xjЖbSx7ؖl`MܜO2s}nj;E(Jys)ܛ3\|!gU^邍N:ߗ |BfЊ~ ت'<lEe|h] cCIrܯ/]G^uqG"ѴonCkR):6ކyejtȬhQ"CʔJ%雝| 7׍M F[iM,=Jh'!<EdUh2bmO&Ͳֶ=eNXy ;`ISS顨 R|9ڶ_r^2ZӍ,*̾ҘI#&DM60j rSgcu[>C5R~(zY,6ئo:7G'r4K.6Y>woP7w-عm8-!%.vg|6 nYy,x 잠^d,xPEg/:p3t"Jh9z)@-q (ϋu,y˞\ ꮷOu~ 2V6%KT(W*HQ:&-q?<9bm7xQuq)}{Z$'m$ ǙZz`F7ʥҗh 2h UrFkg f)poprB?K\)/~*9FfM6WX2ͬnf9*CڙUȶ݊{A1''9gbE3Kdsӣ4 '@ aP37/`%7޼Цf9f |Vԟq (i_pjE)J%Z\!4ׂ4 +ϠuO=%ڃgWE9*Osl@\\͋:T UVjqu>Tp覣D_rY8+Yq:}7Go]w$3 7JBIH 0"GƖ].g;į'pMJad T lGi `ve%2|@BAf~^~Gri"N1%oq͢^vwq)dR~؈띅B1fܝve`Vj®Y]UVzW:nR)OW;aNuxasj5/3˩+,e*Zi5'hC정5k-*[p Ok:THU~śW |,OVˊ\qM[IV[pIv3&/6XI nz*,#p*#TZtb-Ӏ-ܧ>-DI~{4_!<ŧ,c[6??[0HˇlrWo:7ףy𗯥Y8fo/p)h~e KFBB$kMgLyulr?C2I&p_卛랑teQC)40C}Aø^ё' V7JLJDRʭpx!9ha0r:X3 "= koʗˏ_6$✧C`o-^ioumZk m 8>ys!YMߥ\n*%KR^}5Dr+2AlBX4]ɴ*vE:ve/chvw 3l5WD/rR;Sq=&pzPz1UnTvBm_Kd 7~xངr?;/wzu tZ,l#ϣYG޵5qc;"U!c{j.q/㬫mJIJqf*}OIERfCp.΁Oyb!<tI T)%PUX//uƼA evk`aj9Ozd?1K$x%~aMha ZiyQk:n؄;Fߏ] |e6M_/umJ PɼF `%5+]͋vwuXc>GSz?>D֮lORD`6Zv` KvhW_8h2,v!J~ R񔥳1(\Ʋ6)Qc}i:+JNS[zmnlUNеdm#;8&ļv!x{@zJ馐@(G\lN1m.xIY1E6_{{ěR}IriB.hQ70XKgbH3 8 Y=Cr\$K1H PRY`Y-;`f"aYAxO}WA.IA)CC!Sj8!em5n( g쒂.):Y䬖eX-XY`6pifes,\I0t&D$dc`8e>|NT+kqb293jEq Q2\?ML'KN`e)QF..9SXfb"dnydpX0A>lb(}U|`U"kn Ht%LimH/'\x uʼn襇RiKd^ "-3XRJ"/EU8T=:z|u^*zkUˌ zb \2UiTJqi<' Is/@T!&2 bQE%jC8dT¨W#*VBxg(Dbٔ4Mx0eS,&XSHT(Gmp cd > :b^iqOo"N Y-2WpV˦ wTـ8e8ij۲PBotE8IdzAFB@MeO bzZPNڲEx(Ho9e$+"dSzo6餐Ф:2u9{ZI ,cp:5, Z6 MSArVBJV7\&pq'DMr,+0/D$dc3$x]yRhW椄T`ە׮XV%SKY0W˲D*å %{)s Dq=}Բn*B9 OHS2 )zb57!XL/r|馒1uI-O)>O*7O-O`)V_٣sBס^){IyFQ'5G5¸)$^N)cI8 Dp}-pev~Cca;%T*p9OehI \RN -RAKrYJ*o:Բ~RB- 0NKae.5P'\b{pNN7NH,+6`7AƓNBVKј8uNc/Y.9x鳬`jQetX#$,M%t.&N`/ I))"QpD'b,맩4ij‚ᬖY-K2CN A¿n0~Z־m"gu2sf$NR0!e+Ս7pVFYRk.K{ekJH(Gp)_R7Z!)?3`(=WBnBn_7Gt,c|e 3UӚY6 z giP8._OfYePAb(D禦\`\`⤱K"f6NTkk8!e 3rBsYV`5Y]ԗ:kvuM5]jR,K3sW:Qrå/,w Pq5EHVĉi`%{r)IA<QOe)1F[~.]03Gݺ(&7i~ 4@\|OpWjeW_07uz_ԎxjNj{,Gn}5s|^/.'LZMt~?F|1z ceʲVe1nwAL2o׮:(k(nQt5Ng=>eωS4H-!C=yT& í 'or~@2sdY;Q0eFQ\LX򲺎 _I9lyӲ x/Ꮛ8 Fp2WZV[jNᬖ2JZ2@`5p܀bs;'q*MJ3e7"!F#lse ZᲦZǝ 9XVi5bu2 &c톋" b\(M\dfMȒF!{pXO8D=& Fc5ȴ9k߭ˋno`]Y|vwSnњxq'KY%V8aYERLx/we}t>E0$_MOkk06L FU.e~*!Ta&yFƺT"_\Zը8XQqo/ iƞK"X9Ć'/i;1DK 6#0R,tp 1rb֝?ɍd@'ʼFGu/8`ڟ.jnZj>td3+i9~D VoS Qܐիi?@ߗmMKў 6V`UoWwU.w3zP|q,iy=bKlz\_t+Pr &uӼ*V5WPC鉊<'#T*r<'%(H d[#9}R6˰:geXvp"`J߸ew x[榅4usCtW .QϞ9.IQ#ïNB&=-9m. !n%FYT 9ef< s¥Zo)4/l7c߂I1Qȡ'%wl/N.uZX^OeWa*L nU&[գD-{ݫ] Z,z6peIM1JQ7$*Fp!FpNBYv榫0pVˮ 0b 9ewLnHRJBa|N8B^SV0*fBNB*R`(/{x|E7)nPc줐c/nwDHj`f&a3PrUk|i8bO> ߺE6mO x],8kG|׷~u9*։ryy3cENY5߼I)| $Uʱ,Ͱ"9dTqxFR1p. O)a T4Yn)ӰA˙h{p|[02übjc/<[ $Mĩ$!x>q:(տ3 -΂Hh`1EZBY:nlaK_cӾ]y@@ vA@dU[&ËAzm L "N,MTES 8IF[%g?J-Ք_Gj礐 .zJ R}Y@zYp geȚc .$Y҃-Q"X0 w8k},jV-,5 QwOAV0,^]s Zl:&]@POAzϼ;.VIQ47OufÏ7My+B^}L FdaZþnr5 I9>ϗu͉č7h06ûgҬ3xn:͛峻nr"u0jR.'$\.*U^'\0NӁq]=֟y NmN RvȨogG_z/B9^4~yvxx#WgC_u(zZ9һ5,iD.Is|9yDJZ'UuY? d{99(Q\ij44doZS,ˀH* <J$H-qdw,f,"fL&/H4gRĺVJLJU%:EQ@NCX#i,kUXti G]^DL/[/EyŪO_>vCk j8ytg96{WB2ŋ,Yo>ݦYry.suū TyX|lI L3pg ѨYOyZ@18hl/O]VTMo']a˟'YhY\%|^.z&xf޻d4v"=ɦ`ހ+\c3%DEk=Iݤ`CKkX]k1.1 ڮOoe4^qv]T"Qtǽ\ǖֻaZ?ȈϯFDOQ ɯ RГ6_0ZL~; pOU C7h#hp7߃, 5;j6r9ZR>Ȑ71cUzP)1CׇB0uuR|k Lit&X/u#ZiL;Fku^$;Oμ~w&D CŽz#bG'dB A'$ٗn(>ٲNCeL%K> W M5PN(! 
=M"~InV_X |DWyH:pNxȖqkr`k~`1e%h'bՀIQ?{Wʍqf`!H0t@wehjZU_?,m*%.%7<ȳź>cƠ)YPV/WFŇweٜCt{!ʭRH!gui4Em3Mk;LQ]NM"͚V)V$IJ:7nLBQL* $S 6e"ܖۧ=P!)>MSD%w:yn[>P)i?/>o& 26 gIGbT BRISƒn{O4lϛ y{nYP%pYD!V׷P#nrTҊ)c< 4~$IcybÑ^ֽkm0{"x!S(r?G^R(H~ _ 2˒?6i,u)Ht[|'@u poC^[ 0"Bvyu#ʌ4h!y}aGp%-<*cs&ÑFoxVWIҟUG)QE1P7= Dݼ~;tZ"q;+f .B JJRU9c 9! 9q7@3ō* ií?5q((PZ$He 44I6Xq7Ag2I-t 7Uxxm!4/KŋS)W)y+ԍ)bnRG Q # hc{0wzZB|R'O9hiy.UTN̕ݤjc._]m 1Yn &Q\V$d Jm _J9p=!LB\ijHTݐ8*KP^9 a z'KkW#;(L\aPOuP( 3eDEB5( ^S_Ί,1,)=xgtk%U6u? 9lc4Fa4WLR?^U5#.!(CBCXp^jYE.w% (Ej.T8\!UbՒ*`M8 rVZ E@B3nM,ޫ C)C(-#z_ǔd 0'i؂^q@ ,`.0T0a ALcr3%Y1k7K}G2f5  /uȪUUUFA#zU2 ]%ePA-BmdQϾ,D$;MY>JSvnVJh or@99grڴ>]4|3?sɣ)Q C 2d%ajck}vp4EomF9+IT! cCW_2*T<ݩ)Ùdo&-xXtuJZ?Z5߀#vq%#EjжβoNuk9ê 虡_6ڮ"0?ugl;/Df2 XQvK2!flP >٬b^~2ʱK$޶^6R7uZۧAlR503dFo1nnE_N_6syq#g;)RfLK1#G7F_qVA(y7d9 m/?Y;Qv[/r^6!"DOZz\4Dv%͎<=;tw(Y٦r3a(xeJSY :;OW &;VG˓93艊9\Q: @ ur.QLuaü^OS8A~4}Aq@x0+?8Qѝ2YM(;IGU5-)KX93RGTxiBj\aasR)oW'^9z}6WGnݾq(%^WhFK5י),9-߾6-k*P%>Ipyjirqt rվ}mGA+RHBwZ$(@ou}.En2!)wL{aV" *NsoW`ߧP9K|, he2 Ҏ".ZJƾ8 v:eP^,S. 3 ԢR4-Pp"~Ƙ+שTBpqA%jđgllRg<8-lfnsY a]hpr BE}7:PD;t瓦_7gCQgX}t|~64E`< +y[ pv?40bԃf6b*I8W~1XY;"t2[#@ڤ9vaLXb_5:G^\ȓ8$h0n_OdžpF3tfEwK40)ҊM{sMUtFiV>N'X1S\ G3"AaC sWAAbP@B#b:ILPڡe *H+ =.ؼسj$߈m\"!9qiӎ&L8~Ss%x$,aC&rs}2(R@KoAd B Q9N(J켗v#Ns~)a$, !RR`. juI(E0rܩʮ0|K?jOTR1߭0b/CKzl34H`x^j\ʠr]}l8b"XI*!TkpIbZ&ǎvZW|ംx&h((zN˾HԾu+p/po}cl~.v6;*h(TTW@Cy^ > ]>]D 0 ׸}MPJz {W5^7o?wE =y(: s\sI or s􆝬I^6@8ʟ66Aq9SBpҢa|H~HBXjF-r_&Yl>H/^y7f)$Zkߊ"L}c}bgw;7ws*T hu&,'cV\fOF*s0NfFX<ɎvBU{HsU=Ub@|8uܚiSJvs%v-[V]wX /aix?"B@e0TE]b$y Dhl ؗ?T̕V礅,d|;ZٱzI WmFᐼx 6ݯ+DX}h%,OYR$9JUB8 slMڵySJ3ϵ`ʘIXI|4L=jKOh ; 9(D}tD}LޏVe1l)WJ2lw~0/lL1wsfd7qfqySGVү^7u֣y}*%f47Skd%yjTZٳ)@ \aӇMɇ<ߤ7 Nvc!!eEKl(R3 D}q..+٬wnd]K; -܄;3FepwyR7Ox8;MK% 3*kw̃r&"ȇsʇzj5;%;P+pFo0ZKE]tzqV)CtדFWJ.$nK07^N{c3b7]j@.mj5Wv  RfL[F!҆9n-LZu v_z;ugp-6d9 =.?r(TLj-H!0Z/楩zb(( zh'Kzɋ1S0] d+O K=aqz}OﺔlwR.^mw#[a>FFٛmZ+5ѫ?c( E'cDȥkc$F@ 5ǘ cAąw Q&нXF{ٽX֒6eϗA;L~omXr_O[:3eV'e<Βr+xGupz]0O8BL-*×n1€2=,$$r$gv qᶔuҽ1|$g;QCMhp I~;vˆpĜYNhǬ^^@wctW*~>W-hۛbk aT3)bZt߈.s &U}MjзdSI1P.Ws0WX6ZL $4倿K=}EEcp-©2 l>N*CIA N9|˱ý>:pB&3 |;ל0tN%k<5 -,Ur^"GNFŋkR\U$<fj%;"q]PQ dC B8$fsp\#JL) >^mƑ>9=A?R%[J+3SK)S LZk1nT-|qGI?SQAMNi31IXW;R¼N:`w: &0r'Ñv)Hh$ ;&oydƽcYY؃J-+ͣD3©FەRG~pH"3aRB"9TuVi iP+#`<jE$Vu+M =Yy eС8zտ@KLy4ee`*Eaz\ ǮoNY %R]YlzI0"]-!YJeBRchEWEXr#5Mk)jSkcOa L䵟B!5${3#YIؖ(wUwUW׻ٲ4NvzqB*[b%պ ֮rBY+d@ }wR> Rh2)Eҧ$YTpFgوJPm#mljl ;'5I%׌NE8&pΖX[m>X_uOrb֢jjd*B5&Damt#wkG~fݩ/ˆ+LՋl]&U0u"G^ {RAjO\&".cytV7Ww'jWZwpOH֚V,= u3ؒTװc~~%޿-d!X.dm9rnIs^&I5f'`ږlcƦ榃M턬`+!:fIZaF~XS"{}Uz,TNeW<KE+7StRof6)}ƥ0P qO +R:A79Ul+_nkRWZU߸z7k7,&ZɐKιc㐷/^|__GgٌFO\oζOi܍ 9v g7vZ>mQI;_)s'ɳR-T^[MA /_VuKk;{T>+n%D U{nftr-Cn^P1V|8zy㛩ׅ뒻++qmcE'Pvsw !2 kAMtʖLJMYOcoL9T$XU*r9~)g9VK|뻿2ɓw?}zi8E1ُ+7|=<9rVFR--Nx0]d$⬕VRCȪwϗ0w? {L@Q|>hԭ"ch>tv[~3.v6\|y8p67b1_^J14,| \bR~~^~~+W_?Fc+o#4  4gW-$=϶qRR) 喤V]g"zDbZSm.p˟M{[Z+Q|3"ԾSXwj.Fx2rҷ_azx:v{Ta2vBߕ[ ŵ5q! cpK6Eʘ"cGTM:*}*Pl7@cmtwZ=-*cZ3b^dk| Z8zĩƌ)]ʠvE߹3bӉ6q)]#>u?g`5{AAvScSDu6*XAP2㞽K\~W8ǼwL,2XIj,6 c:No3FWtnm4wR+e${c_IN;tJtW:c)4!X( Fo\84Q,, QPS%VZb52t>q46w?"'*$ z>N|N 8uM'WtRz%A %8xZ"l05ϋi ~mUt)J -;Y!Q ܣo>d,r Zۉ66!Xm&~~pdba׉jcxd异N~>Dr~ˍ!!!!jz;, %$- KepC!!*f '~%wXj;D縛IUq 'N}`7 q\@|c;3 SO%`貁Ԅzui܃gC!۸79Rͺ7tr00 bM |cN_S+ju\t h]{S;V Nvi/n3Z A Z&v A%%=/ XNN9Xj1Ç<"dڍ8Q1 l7Ny $\c pE`<1̂ i}ܪ?w$Ժv:1_t W\K&!?|P W-\C˒x L>P噱ж $0KF p()Qn., 7l-IL >~7PR;.XiU;\W% c t)zϐ:DA0 5k=ͫ;t\D8sl3rx{+[QR-y VHʹ'=\ۿ1ĸZ)ZűTmjX];bt79q͌y{`!md'8(hSZBZ[҂68  zc+ɳ ȷp`c ,D T:QXx뉥6G)(תϫ5u+v3>aTsҴ KI#K. 
E-m4`(bRE(T;BD};urY6gJ6yf1<\H]"dV*#zʧ{)I)HdP҂|$\1L4\Eo$1eО¬M~y*{Nަ75clgDݤ.j'5"= {JX"ް:SO ζG`2D0CL(yAEE651Q K+2YTR&Vy`.DI;WCС>/X?O+=leN" N~9zC(GCGi@*Z-wA\E]a#TIH- /.;Pƥ*Nֹp|:{ da=3`CBO/_8ͷzb~{=>c14ƗnN` E u'q_17O n+Q-ja`0ny]߿_ƫ]rO򆕓8ּ~FMj!iM *.+F` e1سv(0mYBgnK3.1fFp[`^4E*/j5yU2pVb90OO"9S YTk t~96y3Q.HC #` qp0]wCݙ@N,ByUMQY0;/ nɖFQD0<_<wJLfhomc޽qA0p F꾪&@iJ(ק/Rxӂ Fe g 4A(xF8βնOa2)eC:/-wJ8jz0;IO>e"O0RPt\9t$&*4HX[\:ONǟ4rwJ+Ds\E0{HY+BXܤ ]G'Jkld!AzB~S ɀ^? bmW8 (J48Г8+* @6⡤?װQCKIs4J VΞGLg@loT(kB@'8)0I{gq_ IO3RQI󥵜y1s-qjFM(ޙ ǩZ)%tyd=u|;k%<' #[),1,R.CbQo+)1$T}\)Slj'jMI|>fF<3\ 9*"(XIODZZI$0~ f\pB=5Rq#Q?+(UpNYP4@(Je97$v$!S7{T!VPX0*ES$r5 Ybj{$N)"qDxP p[0`tUPqٲ.&ZDc&LUKC.ȣ!ӯ<%vwwB?>6՘d;xڤQlJ "MZZ[2k˱gKpӛsFI3s@, !Z +iā*R_ҼP}9h0džV)G.f^*lcD44&o\P*e^Pn35b 1OP bb$\<ϻ^[6C պu`TJɓ  Dѧ.,T[)(l ?34wR]koH+D #|Mr.f쇋lڲ%r<~zؔLՒHIIt<:!+LV_ {Cnoգ,h.Ֆ 2-8I uHRҠ8bYk{[-6/7AYQQʃn#Gv (Rta-_<u#زw Ljlt9ybTHtYG lBR=(ŔޓOH*iZJL mR(g0V)ܐj@tݫ¤=v2FO*.~%xOCk :J Tc٥].ќ[#BfzK -X[sU!vNK2ciȉQn󦷳gF3Mxv⧑9Νҹ`A"Wj4+GkRh*SX:ymD e!nȿ#l7d3%R !X>+*KzTc3%>KÓ. 0MsL3l,{1w: jlF'JҮq|OHPf&̛&JPP&hJ:r{Jo诔Nbs{;1h [kjlᏟI>DavW׀X{k%~|vJ|l-R-G( uhL{@l16G4@2r`({dZӐP.`09N@ӫ>ҍ aY ۑ/ʸl' \8[9N<_ 6@vNt8='DAy2aZr&qJkBȋc K7/1ɔG,F{n/-u?YV!ϐ d&}@o> ppf G~$ ;7b#׏SUXKNM3D93Q0_&Kz?scD䧋x<Vt.~sh:\d QRn6sNd9I|?J)ARyAOPP%LC B4TZ{ҐqTj2adžț6Rf L#f HEJCH][MI&RDH]~;C7u?VsR>tO(_cHn;ZVeP ee{_\qKzd{2&8u1Y<_c2}e;}>Vϝqn&ʙ#/)QL gU4*}]r8? *܈Ld^Ow@ XoV` "y\x)8,]Muyx IJ qy:""xyN˰J|̄xgK;if}Yx?(%,AVrVhᛶC򆒕,Em)t[in_/O ]c|EG >~ (185N7Lu1.i4#Ӧ,mX5?hJA>t2J RNzu7^bP 8_iC^׊oktonFL4i77Sto|,lUK.3>uzyNձ{tpPxj 7̛]o!hG2}pHjh8R$J'cTcPr$ Dad MMBث`(.A[uS[(Dzw*!@B[h[uX01v$F=lLGP q*&#{?/GEmr79w7".Ru|rbT,Tl'Kx}b iRrGʹQ⾃藺w  5l©`ԭ-6 /'y㟜g),CPF͒*]_]5Ҵ|g~x5߰ūͯm_ۼ\x٭w!ǥD; :1}\|a>hh%j ۵@ow~h @o$Nl@ dǀSA,$W]4 78;JIu;c[]βmAфw<%C?m?~l/=-=-MV22IROtys}w?WګpKˋͦxBG̖A֢i۫=a~ܺO8͞YR~Ϻ_~ʤj_CiF5e|cw%6KDv!3ctʤXcc­9u8˘V**5)'YH(| iαeyTlqxЧ[c^ІFnfg6g/F*[|s#'hvDdVRjH:m --k%V֚RJIP!N,=m m>5T7ھןI&`?Ex&_E͹/_|Q\G;h`n2gwI}2-"ǽ áeD1jC"YQAD o59UZRJxT K+>3lj]@k6Rk SZEgW)ɴbˡv1=!Ӡ]hرT5B%=Yb4I/Rڥ1. "r?Z[VM5~+}\ VBޱI䟲).n@k1+cRF3)%Rj]F,ʲсqV%&iRsaeTJeZBc4JK b-R=[FkF ׵3~{*c3sd~ AD2ȇ4V&^F /o\~ۏ+gW^.Ȟ|1՘IX?H㌻/`$Z4gqGf}QهԔ"!&݇oˁΡ]%-nʳˀ{esB.9;ul3ˋ.[ܲ^͹5*n];}kMo;U2@ΎPN5(LAb@:!ͭ""DGgwwMZ3BҌHB- k4qRh*SX:ymD Z#omLK`:)xPz.Ȕn s8p^Q êS: 8 s@* ʹH M D-UYRA, N%k*Q<'/4Ѱ234KQcH} Y+zhðMa3]эHYK . )RKX(?v 6V\~d4 [򌂔+WjeM[Wku$FܥE )CFb0!iHyncgR!NC[T 2Xr 0o9IOTjӈϱ!J]@mɵ9RJ\*qL'6TcDX*LxSi[c@e56Nƒ[3wcU堧ֿMhEӫA@)QɁ8]݊-[S@4 F1n9άpR{?M:4)ʘ6Y,{+B{S@'ܖk-ЇI}@6O.wKRnvz]!DHȕ须޼`mZ nm@)!&#{aw| T:Sh Zh޵WgޞjLlQ9[ ķB]0_z2ԾM֖Gqiy 4HAߵtws3eUtK $^ X6"N&"vilF%ufuوE^o盠b+`DJַaVtx ֘풗,[| ܎z&@rK/ɲ mM[m17Wmg"PHc!l 5n#1i^p8.tYパƼ!Ļ76Q:#HcnE; V2塥@-fogaO׌Q $P^=WAuU'EhcwڔZ$V˸'ȨR Z/8+(FB[K\c ('Uc+);u=_ [Bz1;ɘ8YzPKe)0Y@" -~IoG9gMt~~zw=79!7N o'9A!9^3a&dkrNG)) Q*xP)&>tI)U\%]#gѕ&K{sD`Ȟs:6~hg@g./19!B{(& `VF>$ XXEiC(|)Pjv ɲpzYnZU5WH ]1[~FhRøOoPw^[,Unl`S}I*LL9r8Q.BIx_~GqE(nO^?*ҿ̿hD *Նp%ߜQ/?GOx4쿿_B+1s0 gXoQ/v7A_%Ua q`He֟N6,ΗKM\&0Y $;ADfH7#ӀLA t΄+>z{EdYԵV&U&kN3U4f (_Iʣ/6I[MZ@/xհ`׊X36͛n5w;%,LJ\x61( {S^+M;/ v^U}62Q5b]{D`]w0feZoX! A;H[V\ڼɔ޵R潾ͼWXw{05>j1b9%v>hbPCn|hѰl㕢<v™4}$/g Urzz9w_94e]a(nGDh==kzrU=SJdw( x%`)&-tPN4k=ގ"}dW꣌IG5rc?+G)#vj֌R~rb?׳6I㔫޼dЌП; ƅ4I/`s`eD]=F+^`i(XvNCRTM(@u2Oۯt4=xS}754>F#x8-}PJ1ԷR C}kC] \Ry5$ю{h4Kѥ38msvhn~9x |5E|0ףR*lK藉\G@XK|4(MQɔT>:bk?:ڝ_תqKhl"@ӷ'zVuh. 
jY6B^l z&0|dnl4 gg~_K4{.Z.WY9-@݀'<QO~E2r!))4bMo~ ƾ\9<AeR|3H'@z<5^x$Sk,H*S,&#K4d4Vm e86bf<4)$e$q %֮yl}:wߴ2eKhl 0kI|50oWtgT-]B ̘RI5 I*o؟ VopU8Kh|8qhG fQ ED+EH{iP *Cvq~.J&@d@Qf3j!8dS} LߠM(W x馉B%H s$Ls gqAC(1)}YKMnǫfe f.B"%"gRb )Kh Zoϳf)v:QxVqWO0-x&xЌ0-ѦUz/cac <<9\˟yZCK9w8:zF1 l͇QZ:Mr9 :hMc%ltEwB:gUѢ_>8u~؍~ X_YOM 5}Эs BqzAcP/=J} \LW-]Bc6*e-{܉ps5>+ZV, TPwFݦ%46{T*i  S&:P:51b$ cw$^@Bb'VBhW m~n<ʪ(djQ7Qx\n ,:kWbfI/PL UXS(F1P! ao!;w00 r3>0c-!qj穷5o؄s+*\Ԡ${X.pYpj}C{G+t'`&ޘ&G1`9i sq; ,#Ul,cX“4G l#VCČ}c䂃XL>$?#.t/-HҘt1Gyl!wI.z YnEߖg0Ly `zJ,f+Z=[TX9Ph %y2F ?O)W-ėAp/X@SO نV+_+LI2tQq*N;`iʹfb~#)IHtGҜX\)ϧ o3:`$Qa`.%#)KR:RM3YșB,=,RRKL(X. =ŧ!HU^V D7ݞw!#PY؝Jgf5dQ lhz<圸\ JJ2SƂPL8.f ѱ)x`y[Զxl f_ gKj`JE,VF$Z)т{ZvAs/m1dd I+bБLLPz8U1p8}=xrڲYs%EgYQ$cѨ#E4s;^ #xv{NKqAobdr}qgxڋ>cۄ+k ,W>Wbb BA5CKΙA37M XnQ-ylQ 9r=9<ׁ{ U8@”.[8Xɡ!&6'YNy׊1,gh~ӟvF_v.x7N'/;m.qTT)Yq19IYL:fR1:暣vq~t^X| *Ô FƓYwNy׽|cWpˋ4:_N9kg29eT}gOxϣxGp>y+=Ma🂳_|9.4A2g/m+ye$~-z̕e]K][أ ލd^##/Qw2y+jo9(uhUF@屿/g#1@TEtWqt$.P蝢Ie-pB;)@Sp̚):gm^Ru8|YXRHe%v1r?-E텍C"%qxR ܷrr8E6Y:kcc| tfӵUCN꬇:TCgqwAv]嬂;DɼS$sB1nqE!UЄt:a16uH|`Փ8xe ؕxZC.=,%'pmfdtI)- ' #&)E0%Q4TI40YĨ(jKøQ&#RISsAGD+=#6\Zn8>Zb٥ Ɏ䜂ʸ(_N[t76XɻjU@kF&I:GkLΖ 5Bt9cP˄!Ȩbf9xAi<]`efPet16E}kXRƵ53{ǭ/ /*O9O[Sna/ȱ$_$_ FRK#LYjDZ% M e $U)=;z,ϫ6ދnG|1`6+5 `ߕfbqeGac>L$jzzܕiT1{G(h9G(9O>tdKnێ@%sΒ*TqǯQs3AAj%b)>[48KIC31Y^~q]\/oI- ū?rKwc.6>QO鰛ʿA =?~<>VAKv/y=7+`y"OOu|V;杢M7֒f{s3/)L0NСd<5LȻmw˧Ozo?-4? (n#T~6ة`!̷6x ߇m0ٓ6hs7rȓ ,`U#kZ% 2#a<\?pA' T!G-g.1qjc[HW윘0"Af0c;D_k$YXe4hS3 ڀ <pZ Y/#L˽/kJ?Z}t3:{AKG3G}dDS;nj1(3:сn#g)}2AF%6>5},n Z)̦]PDC+-|.*on6fnLg]8\]mHy<~;} .*r6!%8A}8q& i`9߹d;tc.[zYe];ה !1 ~?9 >a`eGc& Ȍ}d ?}N~.f}>l`E- z!S?>})KQefᖲL;IsŰc dL|>P#`OscӺ,[jӁjF2$jŷ! 3t;P:PbWJ-/q]kwtVs嬿2/;&8;VK.}TXqN+1-rf_Bd!g&0Qv;#ѳQ"f^C/9Z B G[m5-+.B$ʠz۲ 0G_-;_s/2k?6kcʏz>;iR/u%\_6=1OA Ÿ{`kJ?SY|pBr+<!S }X1. RXpiVzP`E `Z7Ck*Rb!p-.k>2) $]ἵIBf287{sonU^WFU":AU.'=pVU0<SɴJ]fZ-V9\l/y@U_%iWTTi}k`9wWʷ ƹҋcajGi͜вMn-֪9"ڔXMTZ$I%w dڈ+ŅEwƻ X6}0SpA6A?1Қ ;ZCk#b d,AgjAu |lOEo-Vp{p'gNCHSh(wmϷQy=''VZ=>9d}&6֘^L9 E/'/_؇ [BTR\W>b0ê;!!wz.2K.+TK Rz&-ƱOwaR[H%e-i%ҳWFTHaϽd;F>). g U7W0ǐB )N߶XC9s젊\kshV> 8Q,kllp}GǣcOnhk5N0 DF"l!9o:SXQd{kb܆@*ʜTbTe+QZ=/s&g\rVBoGsFn_PsZ@*y*Т2a6*Zl6{d*OU '+ ZϴZ`d'>TIK†l¨\cy&#O}6.JF[}L;h\#v%^-G4.ZT昭Cr}d!5>`q*ms` >^6t'gAGVHAjwĽ`w9yC|s*םZC蝙rUN:77G5]ƎߵÝ՜Vuz ">p="}|w:%wۺVSkt@0ldTJj+Y`s3GTpJ0.jEB'nQi %TOr*SI⢘޻3@?*;J +!/6+d%{t-٣k0c^1W73& .?xwקݛ>MZA#wg&!fRgmv3]ꟳwͱ_1q.ΪZ▽kC+՘{_ڥ察&]W!2?k+^ e}6[=lO/o?5_Ǻy`[^7#6J &զ%޺`q6D 3. QY٫ ɇ,&qvYD;&ZH({R-^f] ūI,S_?{k-.j"tt$o&h0 Q&}}_>Ll40$GR;l$⋚irfc&`$rH#DlEM2V[ACҪ^ͷOnB naH|4.P)9_+Ä/|M^VQZUQZnXEmu&Q;`rz L=ɟOt~nW/7@M7ݷMuu;Gn%4*(ӽ?c쎱M1tUl 7qg>LfIBpšPn[[cGOO-xf54zCiho7׎{u][g?s[[UH t3͵fߘ|p?w$TK& h~`O? 𬂹EWZuI| us3QyC>Xsp4?^]nUx ~>۠oe-/F"–F-9eŇKLrcwϫW_A4[#.aWޟm*݆OOrjX?x;nj Dݗ7xJBh `e=\%믬’QdNQr~BTGcԖ,:A}U}~招F‡iϛ:[%E'JIsNlpSw` ;B}>zw&ΝMPM.=/)*37yLBa=>xPSwryйƝސ?YCMK;]I`BVruN/+|$NO9? !zY䍾ܛoh {_i?0)DKAtل|$k>|$1"1HxQTp~{Xިn9:Ğ]&>PjK 9jZJ2([#M}Ѥ|/ߟ>tw \~k@GTJASih_],'ϖyTCTiS);ɟkyP>~IWHjEh6g$]y2I1#6C&]3)9-ynӫn:]S))wq嘧uNі!kG8L3]Qr^ZgSy ib> _& !.bQ  $@KLTsŗVՀDXh2cKqp1gAh 2$bcC.eh7uS&wjr?n9fs񸳜YQq쉡efn3Gy|QA|lfw|I1,('bZ U.˨M1ڰw81<';<뤔khr5!sԽ2'ڟIZwr5K6o^אWcE$s` 2&UAyZQrТ+w-qH{ݡu/s6Ecg[mYԶ*Sʒۖ6%5zGOE6ʄT$ FvWkQOFtnMEU@| s5fm=`28sqQ5φ 8UtG\e&cEr3cD\X3>[[T}F.SGİ^Po㶢oωw?*IqU^' -]R_DW'M5e_h&ׯuή^Vr%W^؎KUF 2W/Ͻ[>-`V_ 9dqDV|YzI.8vB^Q=J>3#Rv](LjEk#JVjGEfۇT )ƾn1tQ*?2̷bzGLC i̷n1(¸q\Zf]5yę;1|{ƊrxKqP`d\xܓ -R y,h;%nBOzrɶe a|\Ah6J? 
d2j[9}c!IRQխ~selcT6>= RKM8˥A \HB ?bR=bs,PDB*-qCTIϽ+@Jjj ~uuп{QxIjDnsU,cV}u>d89j-On}r3qL o`/6^I#Lθq PRh|qj3N\hqjzf Td+Kn+0sNJlT@PRkK|g$*E$`sW@=b=Xz-%%Kp|D9^0.׽hF/D3z!i/Y.[co(?'CE} Uf@kXVGׁYSTz!ubK1[sjU7++)T 8M]PI)h n*%c}Ȱǻ`^w 7k]n]瀹0[D-[/;-]Q|_>m]\zYS M+AA*eqjUoS|]V\W*HA׭u5f˚N" NYޚڂ]+ȑפȚ٧߀ܤsw:?͊ V=@o$o_s7Ϊ#7?F!L#޶ߦssrŏϟ7ǫL~{77%_ۼ9 TΙxЁ]SCϘp/'g]>mjU=Z7VR"5E.{ H R2Ԙ;Rbs$ڌJn\(?.!ZG>b]LȚD֌&f45&s0]vu 1#+Yޅ-h!hF A3m!8ҥM@:Ňa'O%rhkS^;o UDě`ū4+sW&\qugQ&uvY2dJoY1P5!Y7FkoOj=EfG: 0'Kc`;rne r:usgFJ 2rU+f Qw+S̱ZRTL%I̾ZҨd$%[cg&-X?v9p( lHՇg *X"lUeqKfG^ dPmEAWcQ7d*Nkt8롺貹a/9 g\.#{j~? 9.|~v*8?(uV$5sKF= zj㪞6`q5@M)X"WXdg(XdTvy!eYxRgJ5{$H 黲do.ZXݙGi~`Xc{S3`yKh.CyI]rt@l)+KŮꙧ 7K1#"îtrRmP6>&tD1 cڬ|Ab*ͻgcK,DDZZ7':R)JaEK R$n!/nTTRbUtfz?X* DQ:9"oRrl8"|Z/7r,a\SZ"#-[n\g p*Fd[G:᪮+D{cdɶ0$]P%"cڭv~l&RѼ$Flrg!lKY86ȣ'^O?j-0UPM9x-[d>Ӿ2R=7{]=0٪?ɼ70n{-::+>ø9^:M { gCκي)ޠ= + {YudLjG-dJՁUNV"ŌMw].p@t&Gѷ ߯O?lDx8fYmtꋻ_g֟hlKz14Nz&^#W{Ѓo4R0ctǔ96ZgmIڧe 00|"ȪmXlGՒMp8+yEާxTmN~u^dͧ/{guCd9êC0f35vh:{.Р 2@&Hy&]6\PՃ?bVNzVFjaɪy[ ˆ5܁Ii݉&fPR–8"@%Fڦ^u,(fvI/iQ1>O|3{$.P,qDoŊP}Y9h_j֗+HOSɎՁg_ŴqEg/ݜz(vm#C*}&I6}d;f#4?j{ J-n~}TkܪW 5m!.\@7NhGSJ)bUS(}JK.WDqReM|?PwX}ZCcb<+/>{,H=- C-{Y5@[STVvӎFGLF%㚒qj43#Vzu-9E-;rgDŽas55yԞd=NDOy4Ώ'ܑ۪-v|x^6cA=P&g23n fT K.6(APD9W,F4qVq\a\"1J,>,vҞCT~4Pf]eeL }ԙ-0\$w)bϓ OX調xu:ɚg0ڰXoPcD~ s*Q D{xp̤kVTTy[إje×x~$pХ7?%i~L:DҮ<"Gä=[$w U`~q'*'p&etu65W~ynUdO·R rRh9ƺV ZtrYf,[,IS;K|־\L?ǔ0wϗh8 /) =' #-0 W}F^]|[j.]5G` ˷|{ac_^ƅ7f}v+_Q qvw՜(*VsUsptvGYJ7Q YNt_St1DupJ>aGڤ?ޞ9{%Ts_(]2qvuɇ=~9죩?N/ 7g⫕{/o&ܿyߛ;nVlwmm8} }}ܪ~൓IT_QlY:V|dӕnAp!ftQrȖݛ4S͸KtxL _Ks{;(̇/qgo|s ry%|^nWvU_[rs=!.U!֤ntE,wߺ>{OQ <8u xɳ;Z8N=q"ؒ^?)ʮߝ)"J[﨧[EVttau+q08{8fܟbwޯznYnpS Bdo$6ɽ_$'ϝءD-V[1$&F}Kk|Akiv#[80Z+BC+B* >) b/ܤ<+B]ݢOP(?J+c]ui *>]i5GQ\8Di5ҁ,thYM: $= ޶1@%ߜ>\A`C7^Cɰ?IOA&݊N(GݙTܰ6\[k9䬇Y%E"xB, (LO^*h6EF{C%$xB*ŗ&ՋחW=+q],Q+]f44܍MPQuV .-1&%!JMI%&⊾GT]׏+`J{Dy󋛷)#bE\\Һ =kj ^lD-)Y9o(OWFRuIs:.,8bOl!>vָGn10gaq+ߚ гY\пw{.7gأ4%?]FQ!2Ka)ʣSdU 9kHe  Ԯʨ%/zdRo1d bٍD[B&K)LbR-+,0ztC^RWT:{EV`Yq1]b*~O)((|o;]#n]oKCH""~D$?s⑜0Dv[ Kab*㲣m ^B놋m3bE,mU-ؚQ9BM葬+ўP X &,IzL9(}ms ζ#UDM7Tx*JXAs~Vxsj UYDY9d5VdV 2kvDU³!|DH ?Q"[Bp^BYd(G$b)>79FK D0$hV=%լ[*'˩=7VJM?Qmj rX|K,XtbQxs0QBU Jwx;Pc@svF1J,QkB\lʝ)Zw5Vk+l]"ٹ̨ KxxȐb"sPhz/pIR D]pw\hoQrjW9Âs 8gW!#ǁn">֟R`F~8jy$\TP.Ot#f=)ڈ.շqsF]Ǿ13U zS#-#(t9{AÇ] smkZ$U Uи !wiΓ{xG8m9]Lb};m1H:1Q9Y ɹ(X톒sos@8+4|vs%H)Z+):#Ҫ)XDQW6BPJ$8{iz]t|}YD5Fzuzq~v9K[s3Ut`~ēfnqryם¨ѝg,s$uo~btw~00mSE$=Zio M7K])*Ո| Ez/" 焿Ët#}(7ܟ3@| (nt6M,#ڇ]Ɓ;c:Ut5LT傅.#N-S !!` GUd_81eIm,kӌ%Q5⬯c~NWESKUF:)91=D 2>_XB2T" JZk8j&Bk!!ڎ)}W{<r(/z/VPT p"`KpC9Uo /l[8:9qyđ&Bþ d-x:9W'|n)DWdz'k;R8$ >|}2 JX b@PM5Qqԍ$<-黐x:%@c~$JV[ SMHs~i@ZT huB>G!;bu;EW1 t;HVbǴR%OR &w"j@L7nNS^W˄qnvw ~hZ*xW̡RE^R<="lB'p|aemcB;)Q bi821 +{zns~4[݃B"LC[m1Gޱs@RVd7)8'Gv #]qPkM<zqcEvD-({/]ΈMY&긏r)iI]tSXCɒ-eW$QKE7d):) Y)B9"u_?H 'lUu;WzRr]mp9e I'CG qnPV@8BXK[͆loA SK42_xwߘ?hj]tk!5IP"KHF ]/m)Q\u䨌s=U 54*^$B5agѓJ`](88ZmCOq8ᱹxcy`Bf b&~|UkkiX`hygZh/樓{XQmPj.0:N- \[q+%D͢M ӻ/~")E,jI7_wHjSmԆ#`%)5#+yI+i@`dG mݢdkaլJZBROH??y˫bWweռS߅=tt#kET=GEY:lYЊg9!'5_| Kʀ@_FbZ/ؽ>ɋw ^Kf͠%h>o7WZ.ߵ̎TZ0bFXUd+ʭz'O|=;Ԝnl[r|/_ov$K*C [*?jG2ZZ}j!ͺ}ٻƎW ?sUEj A:}X`I3h[iKw'˒ltd#R0ĭ۩">~dHnt*SA]'bPVjƖXBem:=`c+POnA_s/ lxQkKuz "a1PVԨjYUJ>TMY;Yo m`J;u |C@̊dž-_oRPwσTAPBj[Ec6A >}YZAh0ow 7-{ܵ>A:IO^.G #ؐAؒo}aA=CUwr@:A&!9 Ʊ6oa %^ћ6ѿ56وǎkāZġ%勇N_Wm"Y "6k}?gҌq+d]dDd/\M@`aHE{n"8H8*>hVng*>LvڱagÓ|oI FXMFEg #s{TV5Q~1Dӱ-U q *S' n('|PBT 2|QQD힀@Dɉڽ.Su0bv0*k /U<}+%j]'ycR=}vF2` lLq G'L]jG=!lkŌb98 JDImv EQ=#ׅxbN`]"v:dYQv\]do߹KQ۫xo'ͤoS_^k`in^6:N77EU\~yvTG"Cnz)DQEG*ꎛ]}&1b5guHWP1./WwJl}ɞRUq}y~tYʺ}keWQ6^)tE!x w?ӫ/o(Pr^vyry7^\TBr^҅o;* uד/07SVsV_U7΄`Ⱥ,rUd/g7(kx4-۝^ɖ- %iz=/e ?WϩHkduOE+ Ubbˆү+*4{Q5CҰw_&wh)N4O@ 湗(K.y0 j< jjOJRY! 
l,I=#0-*UP~PU^Aͪ?\LM:,|aoS7>u͉}+:|qO)?LׁyʚR+ մ2JPdg. W,JgtrWpLx[Ow51)nB1VfLG ʚ9|#ucUW,#=OAa5s%VF.rrDկ߇pueDȖ3^!xɓX^=/NL!xtZ'?`[k蛢}u@muBcuݶG7T,Nhe@$Z:Tw?bgf'P/oWG7wum4C"`dj+Z7#Dѷeb)K7E%LRƥuV|2&e]+x[h->y~m>gs^7iyB rV=WQH= 9ú8!Y@;ŕ>> 'uZѸD4ѐuM+W9 "_csZ$֟R]9Uux /tCun1ukQ:mƻ΂ $a!l' wԺƐ)8J=CD/H'O؋ˤC\pn) tvogwj MՀYGrwLY.Uيg<4Y+id6DPڤedMjxN}3Yso<#9{9G1+ |6@Z}~Yu%x6#X١[qB %yUAvR%=x-,JŰ @AXAi:(jz$ a^x Iཪ:܃hpMɝ9dX3Ub 'L#%-]yZG##HQEbv:K9=wI-!ޱW8cV]FԹ/Pf?~G`[T!Pkv&oeч˝U"Q:`xĊbՍHU*Vp9[L阠X8!~5ߎ=eO qM2t8C !9䑓9R$tƖ2tX"}|[ 0g6/TYiA| !67g˕`WƤNEUHEEEg,F&*9x7N'e Q Jn\Ψ /2Za &GW7h0XWml>v*}U(i˼X2>@^iLi;:TJ;chٶ a$JL{nZ~\-}?uҥZkY0.{'36칃7UtWO|"I3:$* 0gU*LvoHr{ԉoDvq9Еɭhg&wvJ/`c^9Q;&WвO v: :mfT*Ne3{O sΝڜ+QΪ搨b^6TElI91oy0%f#N!61Cf*QXK̮b#٩*(¼`ԝ>>1^R:v:#D h#λ 72λw EV5v3fîÝXRSD힀>LnGZJţwb2̹`x9?QD: %jN)%%p]gD{Dv9c%csAp^=XJuҊgN}ݻ!y:Kk<`hlÈ)ќ}qgX?F㩻,0}'j:uM\_]Bs=7.ˏuٜf_\y[Ax>7s- k.JnBP*TnwB6يgn&6<b4ۼy(>jbVkݦbY9oքQHxCظV/9lVOݷch{Dgz|zIW/m__MkvUS?j&LQ~}R(3U 璥%h:pA PR[My. ؔ _&?*Cތ9?[e{n Ihi(2 J9N/dJ4ya|1*K4@)SфJ =Veeyc u3tVhI10MCRR ͇#UO-Dmm~~VUL" V>*#);~77gͽ̞w1|Eyq3V~b,R&_Jh2ͧ0fF쑾VuSPKvUQ֒E@[CՔB!rw^ۜz1)QLf6d#6*Yr&5k'ufB]ILCf\F?4/SYs69p5۱QK}2n:YĸMM9+L2;) ?˳/iqA-`|HOqvck??7 cwwxxfSκcwk$łBͿuwʜr϶[KvXƫ]SIX꿖󇟧q*>ezg%$o7zoǿ>[?_n6~?JroQ yij\ Bf{GgXszi蓒-qFQg bJԟP]]N"]:|? (聽='?{ƍ K/*CᎆpjMNv7ه) 0K"JS1 )R fHJs-nt;N6(߈\+ʧ7:kMx :uZz qcWx!0ȇμ#8Oק`ft~g_ _3YCxjͧ_jh!2{+et9ݖ4'#vXƝa>+|~|%:`ϐx ,*3Y^:_)lOuTkƶau9NQWbP\V3b Ϡjk$>bȊhM6߼?&/D^0&hx >|6վ=S v2r-%mn GL 9 URPpZk| 1.rTP0@e.R{0[jry]~>;iBQ u,O+bO7Nf\3E@۽j"ٱܫz\Œ:W }evBCϷNF(" ;pw. ڭx< 4ztXXm&nٚݷ9;NV 7dPE 'r/ȢAXc5 >֨c^`o/3B@ץRBD8Ug* Rn3P 4S8J/KT%~O|Z j4UcV9AKX3J%},ٝo$#gqjLc:sVȆ[̷T"B&%hIδGK6#=-mM`#*o gS%+ۻ}s#窟MZO^qFvLܗϟ?'#_²'pcRֲ!NA{$rKɷwp1afYk{47;)ǟja%܆FjٍPe7sKO&h)٢T.+ (b m6BgJG/ T6 Oՠ+ +q>6񺖨;Khid>@B"c&K,gSzh!BBmciMb=d@FIeu8/ŞΪt)::39U/QwlKlطk®M;H+tIɁBwcFW10כ@:'ѩKt!.B7:sN *ҚSSEȤͅ jC{U$=!_K2?S d #XN5ͼ`,":du[^EWNe "Lk7s}43Una}/ 2fO寒O@{?[j1MOc゙?qƮxI2Q"۳^`˪^g`n'w3|yj~J!o߹'w)\ a >BWZ1 C;оʉx !&hw_m0]T-Hhhozf' Rfѯ=c_!_R.dHTBr?;%A(IgƗ׹dHh{9'͂ڡGz?C';o)Oeo9]䗃;{3X2ƥ[!v-$eE \Kqs"/mk"#m()!;ڑGvٽu*b&o: : znk=j(T{\F'7qKM)PԙIphkҖ,:/uTV$hVo] X*sOʭ93եō }rm‹Ք}]ـ~o7tOض<۝y1M-hx~%̓pKYqIc$~q"yUF7z^.Ow6!⦥/<S [%T \&X9G Q.Z$upr*ʞ)И7"sŖx70GRv?_.bі1Q&QfHRTiKazl'c/Ga5ᄡZLɿ~FO{BSUj#tbA rtv<%Hd%fXvmna&fX#9mRW_ͰqV06 W{vY!icfp]T-nhʀ:< 糱F&yK!o7BHdB"C"H-p6\ =hTHϡ,HɝdxjK4RrKԁSo/?2RnrƔ2^i+ ;%\IqѢ08(K@т>q l5ukQə In+w>.\~Q\7W1PL 1A;cP-K \IJx!]P6 A#{.Hkѡ]QpGxTw[ 2KMByB\ iBR;ah$ 2+:.t-ͭL{L =par j9 $U*\FQ{6IkoOkkqky%93p)Q9|$b^AV.+\ q 0^R$J$§Sص=;LT{);.݀%k'[=󌹢w EGp\A-XDk!'.wSqM88%)"HX?lbeKS*^bBbR+Ͻ˹RJ&#PH#q/1 3Dqf"QpbP&P):L=NPqT'0* V%pHA9weQ8;p !frTFf*Ql"k0-D>6,7U e4T]>_V h"z)O3٨nhM:k zÂ_٥<ˈBgy(j5ӝxs# 6Fl,YX,S͌'^Y$ 7Kj r')M,hbIFS#9NFS+$V +'s*gO\P?eZy gKJo~ BqA^rFK&RDBrU`Jm:-huCp\+MQ&IDx$|HqY2)؎ \{V{!/$e {>\"Nn t= `8z(eZ0 R5 : hsk/B1Nfc#+Wy`Twmw<}<^<75|Oŷ=?F.E&N},AKg_>]ƫ(qĐ+^+̰%a:+U__?7'wᦴS6 ȳ)qpx>U'>j0}7LXo蘭\[}jƷߢƷ܂n9ݗ!si+AV]}OoA@~ˉ齯8G}4ov7qou6}S &0k)`x=Op/DIBQmK>`Bi^4Hi96*ݯk\'IJgZmF}+xx)g+ǿ:̖@A'Z*LQ561nm<{3hU2ɣry8"{,^]F\$ +?pt$0s8_&- +[K-Gl= ~ځnj*Ʌ%FDVudA M%.(&I"bl,fqB=ɛYBS Ȯm*1$']zQߗ2Qgj@OU lRZ-QTX0.Hw4(ya@~V~k[ ܒ?[X/>t2U ?=bm xf {on!ODwLd"_2J[Ӎ9~x\jDMLw &ϕt÷ۆoY >(n÷7K)\|EX.srr]+H_N zFjxqJHO>4aZ7L{#7}Vx,'P5̕rt;*9K/;8{mF ө?gfm յS`UTxUQ"K{mSfB(G9W9m53@#=>ͿN1nx;}ga(J{D&FQt9gWO'CMu֠y7A/|r|ٽf_ab]O+]:@pSM_:._~nt3 {۾k=tǡ?N9G8&z}sjv[鯗@W.7{oйΘg}?a,X?{5:uc@G2HAU4xTW:UKϧ} :nx$Tu޺4L^/C_ڭ}kz`ktx(>?0_x={=ul[}ߏa6q-P3ǥ]O5hL)/O:hO&$a*,^X/gX=] _tpqv|g?ބ-y>2'3 #83 $g4 atW>#gno#2DQT"#fbYT$触WA3nSdf(Ƴ>a %LY.gt |) |;0p~lm'd ΐ{wwGK"-yw":cӵ37`xJlf&48["ݻMHcA*߄]VZH89(v}m]Xvkkk~]O+ :kzFcm"R\Toˮ0Zj~-yd~YƆĆH86 "vޱm 
E剷l?L%O23TH)Ep>/E{!ĉ,?.﵁TI㼻İ?OnLg:\YT-\\-sc]\gy^:N[W]+ȝ$SëU>*bT.cDaR B-S.Nnv{ +@mH$K;DuKX`l. +C}Hhu: Iz +KigΘ'.{ETRb|5guӹų]\[Z=V]V](ߣEVOH㧅5ܞ+Yš_u%h;$/ɋ5/KrwF$FAUԹ47k;UMC뽉^}*." x; ai{cyM]Is+!6!1n!ҮfC.!`[1]RoHTn3+ҐyBFYyتkIWX" "#H j9y[ ܉y;q٠m[h)w|rڻ+wcJPoI}7؁f}Ym``qfpѻ6d7R5 _6{K2&\p+orV=juא踑=זôвT3iy6SEP,A+4_Uq$ IVAN@*A8WQ@@\}ʽ8y¦O>M}J)tHiw@n7^o7՛n[!ЯY՚=}'hL#J1NX#l#ξ=}`)$GwyEإϔ;]]\k/4ڻl:jL#sonwWw!8gTQA>0Y뽇Vw -)-)uzswz6/Z>lRS"1`)B `\M11-KbmEj)G/% Oa H~>0M`@\2Ca0#+&L[RIS(UH;RhpeBbBW S#fi߮"rbdJqK-1$1 EA %#҄% k$zm`hJ*=)PoV}(xnV E՛XCx\bP[XnglPn KL(Tā0Hz{`Zml 1M&\ ǂ~L _\>|vj}+&VeqZBWQZU` :0t0 L!$8ws _50$+wi^̝ŋd!62Rh)q25Tڔ.Io(ⰖtE>3$m-V* CB IHǹO-й֝2$ UN=VY_w>cSij~ٚ<&RtkSۻ{#2o{Y"`*@#"l?TE)Jm"IʤE+:Ӑ*.;HT#qp)`B0L jЉ6ؘ˒4Ӑ3@þ_} ɢQ7mC=ͥ{ߌo!eb ߁&ݘKY"k:n޸eF)%>xtmhobtB p{98|\Tk $Mh&O[M7ii0lD;&I"bl,f1(",IIqbA#4ՄJP 6V I),ܤnҊ q3_9~ H" EHLc*̜ r-< r>u>qjyE, "S ;w!f+;~I./dF9ӛVgCu4#""D9EH_>;8كh,(|v3آf?O GL&}5vાZB1`qxQ32xk,sD*!hE<5YZ=x,5M.x@7G:^5Q=s1l?N‰e Q~O{e|O΃gTgDk;~t z=$Zmz:r FB;I"n ̸.޺c6glRy@ȶ%]oFק|4$(;oǑ՚MGxY*rɊng.>0,X*Z2jfKmg@j0zu>iN%s:O݊O'>Rɻ P*|SX|'xgyܖ Z UJqJB`l NCTJrwq`E i黚ml/lL/j6^P)]e6ZyQxgctr/ I-[KЋ*|tbׅ8u1z]D ^&TT*w}B!S>OmD-׷'_"r-[%/~!% 9vR^Q$ t?}3 IA}}i64-Z 8`=&oS`cN?/ݽx mˏWSKVw7~pB0e(o QJikѭ~VV>vkTB!;`Wx_\>`81x/,&#A=gȆ}LJw.CPfZΐM'wTZ/4g'DzΛn;Lntp$q7jŐ.W+gRhÑ]%Coͤ4cNv )'F=xp(%vQ7yi6yxc} fj(uS[QYd=Ξ8jv60>ʒj)Łb^Pѳp-_Y-jysVy+ `<a9:ҌFÔ~}WF Mdl"E (=֖&jq,qUj,N*(S*EBW/$/2#q ,Tΐ^>ܒ!WqMxքwaMxքw5nxVAԦօnQW`+㕰W*_M*T\܏DHs3w(b߈Y ,MFȢeNSKo BmH7cزP%j}&p>!`4Jj%s/Jh4 ,:tiI[jA*yQZU($AU˂d%3'? ;L 5z j%9hu `CW.//Ix~3r ]6~i䖤~TD1ܦh\汽"aҰlGkrlF=ҹh\ΕܮA)BAPÔId8勨@[UBxg5HɁusk24蔫?AkUA0k WF%0Bw}E&0vm띂9ő!P<%,/d:qe;s4~Pb8tEcCɗ/LdEh*nmq4??*UNSq;t#]=şhm}_!~}\dδ/dp^|_msOd܄~bKsM]l8ot&xg I9 Q Y˃kCuL1h g:\[r@KV B;+MEu KkMueN1}u! ?5+BAS8W@{tr~^\݄_}|n-+]p‹ }J-ݭ7P[Ÿ=g 6B%“ kXoj-XJ+TkYkxI*2+ڧ*)kPF` _2';TVI]=M3]JvL].!gMͼ`I\ui P,,nP"C%PN C0ʴvSx{HE!4hE:XoQ (p5dB߆9'&dlt2hQp#VP}_m|Ч w?|_#qwwt9Ә?ԁTС 6^ b^>Џ],%PQaJ5pa&#X E&rYܮ6 ˢU+r RbJ[rƗX_x<ނyJf(2%HŨikhΛWjt^:.Kؑ/3g̈F8h-XVt;/'J bJŮ2rr-Lv=hb6 _WPJSՖVX 'uU)-74A(«K/n"J@UK  cϝ*(8M>׆?u-A ىeua\WTKKЪ*fICUBseOc;Jr^eUdjHuj}7?V L8ϑA\ߎPT>h:rz0Ekn $RUkSpsu[c?Om-4 $0V`c*i+ N8,N<3PuA/ؖKgKu-SP.KY{͎v9i CH FcNy>USoAСxs0Wo`"14k΍`l\qVƵlc$Z +O`țҡ^]A^^4 >3 =&SIwCa%'G`K'*&A$:F9(Y0e=ymo8BϫMf*.ST]gH %14$.: su2ym-2f^5xE{{[5ѩj_j`ƒ!5SAӾ~C4F6}Ln`z[*bW$$스+'_*N(]=RXjklM{TiTޫsk=%_L04rEaIFbׁXLSB$Ib̑Sh$;]9|`C /daqMQ^%UѸ[|$}|,Ty\s߼.JJ[U-RAU2%!ò+WPԐcLG[y%YQqU]-2nSdƼF%JW׺օy sԄ񆾞痑5,PauHUUUɐ (4J!7 CJlDeNӣ,hḽ)-M $aҋC=i$QqE~l8kCKbشt{@H!W2_9woBcl w+OlZoؾ /_Czw 贑K2L){h2q}q=ЭOW UnHhMYV7 W\ bL') }_@c[ݲ]ưD3l*.>>9Z enDmHNsNw4wB>slSܨҾw+:9}}]z>]|vŦiDZ6yWӗh$Br<|&טxGec4OH$h1^҂/Td>.l*dFgK.~r֥C _n˟1++q$;Y˷05V4 JiBrq3$qTY{jeuk+3DV5 r_֕-YPHpSۿg 7U:=CFʜ{oO-uK3\н]2)]]+$ Eo]m%i@#UR֣J3СFuj]Ҡ?Rѡƞz*rbI2\J4cGUɸ`tYd[XlNh&8iGpt E9N[KdL„(W\RP|{Pb~AJF/nۣ`Ƣ!o& &e*-ݦOḟ]2 .c/.T4|VdKg4 WªerNT֫ʒѵ"G_ cٵy+4?J=,uH~~&q ~Nڳ'!Jgm}jJ1H?;Wý4^O~I ҴV.M[Uz~JƨxǛѼۧgWa0F=L(&иWͮ?ܣ*Tُ 2S4N麧Q&QXؤg&+RlڹӸiqx(wڃI7#? 
X=:}!Eo=+OykfEc+"FV94FWtΧ@W"Tkr\ۯ,WER`DQRJ8-*Ԣ4P%$ c]* _%MFVu\vQkV/&Acc>,dJ ^qW 1xż+^xmuo>7}sd0s`vn7H` F\J@%i2%MiO3/*?|3?yQ٦@+,q DW?k@x.fB>c,Kf|Q10:,] KoZJڌ_^2W/h!a9%Q]dj^Q]o%zv=n|v7 dϘX&Gl=7<<_xTk>itOGV}zIڢV.mn%%7MOQ6 SRUу+hPA3] .OϮaUwY+iv>==hTr99$f8~%M¸qDɔi i:I FӐmg=sO?txRDwRHT!uR}٘n:p4Whv:(?#h=7SG;B(aYܞ2&s\h"L̰0֪0+dL5ɩEyךJ;ȣ$$q^RzjvQKf ^8|u[xx% ;j!kiY!𨠂,SLͤ2Is>  Co+W42Fx@X% e!c[_rDxYB N`߾7PB4Jσ,M'H`h׳æ);E]Ծ%("0Z7*tFPZLY;bܝ Ѻ8IG֨%> 眞aFC|mIsamD|][o#7+Ƽ` ̜p6Hy9l%[lRKjYfF Ub ]ێ*Ffx ?3bn_dˇu|x.^W/n>P,独 .2!=EZ0+ޛ ,) 85 j>R 㛫lfR?\{KzZ[0 ܖZc^>&sW~ZpHƛ/:{3e =)Ttޕ$&m2|sbTfr4seAẠ@-+4S^W”uPNnn&MDbIV毅Bt5e<2gW(c.'Vf` y.V0ŌU[j` &$  nYMKŋr:Oq`\VX ;1WQΐHʁ꿿hYk ګ1T41sxb: K`3pz3N_kXBM tx]+(,FFvp`V) Y)yëKv5nEbJ4$8*n^dѐ&?l9~iSe:$J+I9!8NZdơ<1yr"BM0VXXȞU`a2RB;DA Yn8p>_[X-Jadknuc7כxwzS27hC(c8|gpLTRXH%qnC ,Ačt h.k4BrL%ќ[=gzC.y''qjzx9׽uNʔҕK) rI1NeA3ϐ#6@KPsyaim2';¥BTg!VoR[`"Q -{R=㊢`DOfv"FdKp8uAr8#F‰{ø@.p,)i5N krPD{FD Z|G'ۡźݩR"ER)\o\DuHpmss V;zԋqO~x[^]h*Ƿ} ??cw` ¯>s0Gv!W ߼Y>6ǧ[~Bߔ揫,wtχ -ƅoJ!Z2+B߬ IhouuG::|TG&ēj{YE* ćIuWZ ٭0ky$)$DͺBk6Xگ],#y\~`^u`^u5YUnCDrRyP1#ơ`DjFR) Il|yNyt;uPҭf,&_~ ?\,Jj jpOΝ<MHUE(9Az5TR8 :/k 0)f(ցgk̉K [ P]g_sҰ׻)֤ FKEP5˙Ƚ|룚G_?ggo޾e1ê/DCD'BJ ѡ(9/x(ǎ5ǩǒo1???L*3p*! NUjncE۞9bٜTњ 9PӇ|> FDq{B ,C2ǕCfyy[#@Yϩ1$=~ԥ*0*`T2\';_e ļɽ(~|Lb)pp/S;Ԥ1dHM+=k(S&tX?6_Wǚ4x>5Qw'Z9_Z>݅f B#ۇ܆J.Qwës% G!qbI\ ~a櫼Ep4֔Qn74fIc񐶺L_{5i $|7ЬGf1nR?ٞzĦ1uYŦ1uçS~%-Ý'|1}q6/eU0 V9C!3+sHs;()F6MSXdz~+ E& .X[S2VV=!Lu u݄^{Ͱ|%H[^ @Tb8z_2 ݾJ?]ęO[O%8\p/d v__Kh嫵B_.c- FZ#>H|CބGOxUo~~o2{ |k6 3Xd !:؜Yn>7ل–8ޅՏXũɬ85Wvn%g$>0c +Gc%&,ڕ69W&0هL%ٷW ܣVZn|Jk^{S]G}.1R{f}hG{ayW$z-˾xR{yIe!41;4L ԚKGua*FsrœpT͘wcT~?guEBRfM*[K(f?Ku_q+? Kk~3W'ӇGRz +k%L^P! #H\ 1_8 v@Zh>2`/)DC.ecShl>%=L3T/) [D%jt0#5YN‚g(pa٫'N;5NI/U-Kܢ5W/eho\2^=KҞj2ٵb# аNat9f8=]Nm.qz݁PR@gOw`Ujd,-FNc<$F&A5 XG+|@|wZJڬc}}lncm+'sUwOlQL&5Sɨ_6RppW++_G~]]GT20;P_'<|np'0*%un]7ct mʜҾ 1Fh lZ`(7HxIikYP@5 AuC@o Q#2(rhEcޫ7? o6c-dzQ("O͂>~(fc,WB 8lq|,zx}3c~򆁟<N/گӋӃr4::h( )ӋP~Lwz MEhŠKCD6XdƙL~{VBA96^+<|j'2 ' lR/r^8rgmʍYRm(3h?kh{4Iƭ0n<'@V#d' ÔsѾ00 \r1nbn*D?+^-F2u{ ͍4/:,!!$FYp(2jٝL[t-(^2af Z;t ŤTP\N2E8 f\ctvF3*Α5cZ(ǔjFB .NEHRY~ %{ VL\2)@2C 3˜(Ӗ;]||j{b&8=C :FQpx@-'g 1Ds)sI"޿OD/ kXaYkk"S*wh K x=>0;{}JŠp/ݏ?tc>>oʽH$@RɣN.@+A20 ǐIXWVۘJ$|BҞH8]q=ӳ'+X*'7cJqNVJʫꫣRvK\|PB4I$o/UT(g5@)+p%8ѱEIflXU <)Z}wlO %1}>Br.#=B}_>Dzz_.&e' xY3r޵!9h0y`donY"8믔BE |?жo F®{ L]rS(*(P`p0 l4jRS-4OY&#s>̬w3JUˬ>&P OODŒ76lj0'F%9La;QFVʱe|89,WIK^FS|u!R)k,:bh2پg,? -s`!ñ|b1XʳU-*C}z݁zyŢtvpӹ.{zy뗺N/)uZRȦ541@ReW ҡ ~'N+Վ_lNw週+ kGq4^sɕR/L $G,OKXfs.8bX5m췉swGuK0d֚W!6_\/7%2MP%%,K%`C(fcbb-cJْ4Pc`h뵧,S3X6UmKJi` fdڪjeuHD,'!`D-cHcI)kIeA}ȔU_izh˫/ApbA[%gyw4R"}k_(:[(b1q% H$Ah cܑcKx4a$‘ QdY)} T=c\">- r(Blsu|}yYa#a-1݌&"}1{&}:#  YyDa~Eߑr3Π7ݥɥݲ?saigZRɔ%9 &P!3`ɇ,6-\,lE#:#p5ILӂ^D,q4ю#9̉<7Zv|Bmğ\? xH*x}H//"'C=,ixri#-nԧds5b; L%tȫ]}3hkn7~_2dXk#|7Hi 1\_r$?Fo0[.VCϑb-\"&] 9% &xq3Gf0TI\o1Y#@sŏ}2ەW<`xtc9i+B-VTTDb@EDUPYjMŒ ] k3g)m3ߦ&lVѴm8 WX K뭗DvoVNk{Ex]pl'GVxcD9n\ m" d$6+%Y~\i@)i˺儋bq!U J Egw0,mN9Ɣ6yRT^Q]{EuuVQeK8C@ s)(a$Bq V'2J0pD2б*ܥMBoT$?GFo*@BBX_hʺ">=_ħ|^ϖʺYdhG($W7 U&X;c"@6T%jn!J_CC1`}gd#U>R@t]:;7 Y\5VE%;GTBr@xS'/Qi .ߌ9?\&Ɵh~|x⵸cBfhǧAϩJgzy|br4s0˻..z}.:E_.;!!: t͢iB |0~s`;w%\AH+1K>nH! O a6S.'vʟBk^ڇ4'mLI"xvb$a5&Q\`Zbeᤐ:픇()tOL{R<#yXxX U]{<#5 p0l):!eLDށ7`ՁmƄ& H.]XXHZ^~e3߳g,JLfrkPZ3#sFKC)8`Sh(d؈9JYG7 QjWϞV.kRJ,'JIDJI: J2H5;#vS%q*Hu32_X3l)# c*RءdP|}BXN(݊?#T\eLMN&S )OF nKIvJleo!~h0Oz0Gc.\zÖCF()ū9E/~ 2[O +ׁ HN9F-1"(&snQTPFI}>M"N\',F$FAP.L8R1(\ 9$Ή~y(n][ύx0Mwp3L c ,\DSL2lUR#I,Lc,"QDerh⇉;E`1QqDX CM"'Hoߗ a'Q߸2:jh j~AZ$@ |DG1a܊R'MR"bɁ `T+Z8۾ݼwӄ8NAP &&<Ҋ% qId#SzL9lI(L L`C;6-@dB@ȠHJQ:^°n,Nz 3%.< <(98-@1$֝ 2QlJY]|j·G4=mPW5l5+U66LAEԍ!,8:B%Zc+]SVYQJ+52[37ȄTI=U||KZ_޳ɾӷϞ%;#lnP!" 
2nz>Ϲ̉}zZÀGu JG3Ը8f[RD5C@h}H0ƶC@Y9]3^<`Φ%Bbj@fJy׊@nnJCr٣{hõe,1' pYY6^ EJ6v3\rٹVle&UurPUh[+d Kj-lUjJˮNq=T\m.8Dnj !0h^h4*%$K=D_}!2Q靍ڤE ,Q)I"$_%PFN&Yz] t+9f`mX­(Ê$e/#" ._4Fao_2%]JF x*7?HM Ցb~̰3;T|ũ"x}ؑg?@DOJ#ߋ{73)VIkiFkbY3ӿO'oFO?%nF7۱=w]I/'}+o~q)0O-L[' оSҟ޻t)ӏkwt m%_mٓAV/vw_CmыS5{sK:.:D G4TTjSX+}9Op;a%G @E_ [w&,K*Y?^AfߓUlmgߨ.L.r!Jg;S0޳T0ܙ\E佛 ܴ|u;fށaɐgoF@;ܛUeJg>"쩴Wހ;By!]z޷MgnB4O3Jęn^|c~1S(O4#B$)EL3L{LFqkzq#Fёd`ˆG ,1=kFr 8XjX`p@l'0ۻ܇ Ye5N}O55$=+QzwUuu R dRq~o&~b>^D;T;#BhՐIP]r{]I)g[YQьRRCig HtxTyQZ9lM[:X޲uoy)וQGKZ0GKZ>~:WԲr6([Y/+~WK^'Tqb9OgwZd3,֭~٢}KoB+CQ¬:fwO(>/˅ZSrjxVK= L껋Gx^E;kO2Z“Aor Wڡϛ`xHcY'uȓk@0i-y\^?݈DQxkļAYjiPrS&NE y`廟7J+샧|͊|W_x_%ݐo I>љo7ġDdCigի#Nh}m-}pLǃV@T0ZvfC_ LWDRb&Q>d1g- /=氨NVu!.Q# ie62GB`uӢ%xsAx$.ǖ *K,vLI.|G G |_uUˡu ˡVTւ:8q1 JWz{Ӝ%:Tj9V'@s#SC@Ad0}@AtY*&btIV|*GI5J^Ogh/ L@xYMMXٹFOufm&QVFݮ6{65p2u5΋|q3$S )JmFdC.N?f6'wQ0ғ [Vf$+ҕ[5C+ZPNuGĈ[0֭ E4Hi\)u 脮QǺ]/J[0֭ E4H78nWn45XOIw[0֭ E4Hu@z ->Fvż5ά[0֭ E4HCFMn4JQTz[SB(ާ HW.{2=eP<+M ;'8};4̦]Tr?aޯ-k;=X4r4 fK;e3i,lzҷe~o&Y6TqdҥeqGbs7$O ַ!!B'(!'DexO;XI(O ;BG -Ks#H+{G41ðrK՛x,iP?ƒ2 Z_1k_ކ D  b=1KbcTTj=1Kb0bAQK^?V+ڏNJXQcJ;C38Lp߹Ɲ:<7ȑ$6HoqJCdp[ 8W 밄p?֜F3f4LŔ4fN<&H$ǂ1Xc^i2+ @6Ɵ3ML#BEW}M\8OchW ֢#.IiTv&96iĒQZrXMXkҪnx7cŋtWg~xy!N,;bҰp97>BssOpl1cf='@KNRH*/XC|2h+2=cA+sO~" *.IZ> .dw5hTO漠`S= '+,nl^^t/@ɔ>* c!Ps|L7G+yNPc_1⨁ev TNrRj6S4xK=T#ĩ?/ŀ{T!մ!?%\mZɏ5oN_ʺkf¥W'V"Q8q2MlWG1 \G1 ((/Mt/l9VJS|i)%6"!*F 0&%ʾ|9Q Ҫ,׽,cWlMGfmG89^䑙g`;B Ʈ݉˱`SPHbn2lrAYd331Ϡ톰,⌛DMEK ,osOm,xCY[J !Fh~-qŅ{"6F[*4\p4s [ԇF 娀 ~ *+br&/WH!iƻYYbKYY 98"6q~y)VqM:zv\Lk2‰< _cD/IZ{fL'yN/!4׊UV6\ 5R|뢈7kjGH6.\RM>j3WZPF%WhSkŹRҥ b_5 "3,|M>WNH64)̨ja.!@^mI@  +ZU.6#e0F.Tb%Uy.IUO)G*4Mr7/|z^ǃ8V#z|tCVF)܀b:o]9@3] ܩ) 3 4 8;%7\\ 23O@1)ɉfk5ipZ@ p "d쎡VVL/AJfDGc^^P)Ih(vsl&U ),q:ѫha8u#׉ԩ%W;~"2yd1K9@wL|ng-ۚu+unrm(\_^EqX8^=|dv*ߖhUl͎8s7"] ړ Ōࣵ:# }2 IGMeEq,oނ8l9Kqb6 S[B|#{]:wY!{G sp9w; !"-M'w<ùpAt1 n'cOՀA/ 9`Twq61`L˩^ ByWrb'ע,O {ƈ4..>I ex9Rkb YG٬@ `Q ߜ*A<]A8"wpGtnψȏ`ՂYr+'yvZtbt>5bF`Ω>w;GsD;"wT-RoP#*4\'[MM(|TXMyQ4ܷJa$u!UYކ-؀a[Y` ~^MijRU!lTe"ܥi4\qVMD6Ռys0QsE3JIqGbF0è A믌GX>L)J[a6R-Xe}9B2J[8l?NV) K%X=ѰIΊC"#֜j)m'KoNDF0#`\\$,F xbq:KQg1|0u "ᨭ<J8% Gqʆ6N(G-uYVLRxaɘ|83M]Dvm\iT䪝ߔW\4 /:kܾu?r5]3.:Bu QG#f'j0Ū⬪οOt5WunN2x%dK'p/fnҵ DKi{<tlGKUgp8U׀%P.' NF? Ɉfl[ocG? XDq[͓hQ!?ϗCN)Cm8[*;n?`@ "X4.hhBV>pF,ûFj3[No]׀_(MU!r xh%n⌵cWnFMlʙ)h dw9.2}>)*(kaڐ{4̭bFZ[5zj:ӘpRqs4 լ hȞ Zv͎?n Ą?Șj ݻdQll*)h]q~(jɧqT֔VY~bĞ =i6EO\//v%NSF[Kf   ĜB(ǬGVjÜ4 `zUc& |` x&DGΊXîz~f(б=PyHT{'>gɭ p8qefcIܠ5aϊd^ u0Eu$"Mrc(%@͒zk\u I7{ %|mɸ>(f;Y.O3xe*n'.(0̥ow?6(ZiGuWWXB#gX԰9"ZQ [H"|;%E禠}j-0mK#W$MKڇڡf[DS#QSN(4r;?#|Շ5p6Ɣ,á#Ncy;[vǁ0(8{`swdE_%m=Lp೻zp^6S򺚫;1P#Wa5$z0C؉.XSe$祒Yy%q8(ǂlNAhM_B>9 E`./(Q{2Ѹ% `M?W }'hnv^cQT}6Wz=3g# v_ A4]j}_3=ͤݍFB &1T*%4A92Ò^7`YR};IH۴85xmr&W@Px1^GOnj%`X'ݐV 3Oap,z~Ȥ*/G#,( 7}RqG{|>11 C{KX۟~,Ln5V#^Xj,YN+6rK] )hdG$Ub$Қ< A2,i%lKӺ/E63_@$f0c(Ĝ("Sġ C(9 &5MمM(dFvl)Ʀ)İ WX<s tdrJHqB8*ڃq V 4M2/hկE1PLi!]XDRP,$ QʄˑD, >:v'(IAJ!cQJQb$~IRXW DP$33BpQ %?3% Mu4m'I[VXZl-wSqKOv7; [r2g[8D˩ }?.ɫl<T!:.'zSu \HICqA}Ue_ _ۑÏJfKm7&ڝNNlX JDt[OA xL!썖ښ M{:~"ݩLL Q @{z"sJ3zG\Š Nmgp? qP 1u QHИs*$G%(c eT@H AJ8sa)iht 4f/j:vP-CM1*Mr-[wQ8|+|gR:p>ԛ{{!TO9qFV;Ft`#aӺ P "̢())h|Ռ BHMHXw `Lzmpծ(Oµo=vLk>W#AB&NMRH$1aSLhWdic4\x^r ;f<}VɣYfSq u&_Kvy #-'s)4QmEX,fTF5\d ym}'vXNYMZKϽhbݺէEG-oOQU'Ԕxo:|uby;cCYKCAW^xV\oxAQZf칼xkmjQjC7F _qSŲF6 ڠZ.E2kVƜ_2H*џ$kH%3 > _ )\Q@nzmεd.i}| ٬)ۯރԨyϘZVI6T .:Ӽ6FuC%᧡qzu#NЂO"9݈< XDٙfɇY灗Q^qldyFtƆh༆)'*eZ@=)ɩj^%xɌ!kIڃ~~I3u3+27I$ vjxYV7_b(IJSU;<ã;*փ((Vјꀉ,ΥU뤺#;CU1dg8@^bSVtrr3_imHkinw$-bL+74Ifc%q2׻E;.Yh4u:s9"{u=g[w\0#V5n@HѤ p>vꐽBrcZ|>Z,]`ZvMF6euE%4̻rf!`(`SwzEKz0ΰk$cS}y$'.!_3cYҖ ^<Ѕ/\֏E0Dv;yی6 G $lQID!6tLq3ُ璩BоSt/NihOK. 
ev x:ipwɅ f$|5 n  Iy*x>i(otMI9}׭׽(/6ZSͮTTkŶ#Lp.6 ;փ+(c{l;ZYkck\ЩP?#~j|;D,K ew;>/t*WA3x ^ĖāāWq&DۤLj$ e&xB()# SF4_(8iNSxZPwrbSgƊELIGd*"5@DP$` 0ix,\C&um"irHe *5Cd$G_6&f)!?t=ݱ=~P)k$(CbV%4b F\1zd&" ŐD'zJ1gpBZ^~*Bހ M@ 0}Y I84sQi#r2yF {{#O6~_ÁfU+%%N(PmArϳ3*d S,-aT3 {i|z2d?Z?;N#n I4rceb>zHڠE/%r{EKh\BK~DBp*K@ٸdlNd0` psUPd   ¾ް#CQls JEQeh"&oPc eFFR@D#4 YKew)xϒ+o.N/.ƒm\>ܾq*lf>*hGg0o+om$螙ow9ܡ23N<|;<7cV1[2 4p$~6%KT^ߔ"At_hOߎ۫Ÿ[Dq6=sF!=KNif?sf+\Y# =: _!G`=>O/]̼$L<]'{*J<9r1frӣ0Gq 1\5tŗ%{)T5I܎RfsʌgɗxUI|.)Z6M]^ Гғ*U+#wڻ.|Y3[;@}PǼns.2/x ; @Wtܒ{0,hWN nm^xr_Sh 8KݲQ;jĵmqf, I4džy&ix';ǒ )R Lw73Rɧa.'I>`6vfj/#ޞ+Ɣn+֪8æܵ2h]rn|5fmS>4䕫:%5(w:E Ɗ.L3X(]e#fW,=d}%zSF%vghE*Y6^>_8|1Eˊ|_]4f\}ȰbŐHF#+N4v:U|-WZXdK Hq9oV#kZv\(*,t1wh|Aםk]׻e- J6boᔖ%uW|8 g͕-Y%[25#_˳3RswN=\yŤ&d0)@FPtGlK߅ Q7NCV4ְ0(d%ep MNn Atʾd304υ4 @7:sՒJ eJ G^K44O™ Z WQ8yl pW)P .cPb Jb*{J9{Y2 1 o*)bHs(6U8y2 `}nγgZgc|g&pȘbh`G1ew{6oVjI:d0|fGɟEWח> *ޕ0 w`}LbY +3䑑ۋ&_w͑lvh[~8"ھm#[~HϿ<M2cOF vqr`Cc5CPj9$5 ݋<"XOVmwZޒ+~XJ=Fh~K\&7O3in3LO[q ?@O*Mng~X:Lf5"jpЍt|Χ臍xϿxp7͎XX4x;k9Ocrh|=;JPXYr= 䰰ZXXQ{C 7s!:@7d/N;Bt:K`izrߚ wtgaތ> BrIFVd4ba[4{YnWOꝌ>}i]VY8Q)> "0/1Y$TB%[#~ |Iod(лXS{~aG\r%&ɮeK~Y/,:y+4ϲj. >K+f/}j_KPyeMO57;n nɝ0쏓y#*;yf57;5_8=_o0t\ ٜIdx P17/nNNx"ᾓg5nAu=ЍdVH^xs +DIn&^tN8#]+a]K OR~rx d]'PQQ G2%6򉫑Yi,wtF{m9j{0wU׿gվci+W<n݋;e݋W<2R1'F(H'FJG]d\uv‡J zu{JO8>.Ez\Rm 2(ۂ?]AS?~>h0.?xٻuwwH._c||A`?>;l|{v68eӿߜ}yo/.B2ڦdښ_xHXiʼ;?7 ?u睔sZ^k*<䄒a/+ur n_SmtB)`hDࠂ%zuGU s}ZA͎˽kD)$ݮѹTȄfŬ#ɔYqT0y~r6<6Y)=*(C:pk)}_< ȷmd?<lpQP;oCHobK{okykTk1'"6+{m6ꂀ6R 6 EB!]Oyb?~;UOE#m3 db2cO*XفҜP]x#'#L,((A9(<"Ps4樲xdRsGunCd^ǡZĬ}ɰh`_r_m'b*w4ys-5l)zftF#>J)mq_"9x~2L 86/u2#)(Pb[`veE:+-7x_Ee>vctҠ*.lO$Xv(,ޜCzx;פG,bDoC%4z\B%4z nǗ#PNVXf&TkNOQ*)9%r֒_ 7L|iֲ|R.Afݹ%ȹe۬4woBhr&=ׇK56'RdRXS{!^ W$[mt43mz^1fwܺvբFa.׎Ҭ8lc̍CSIiLEh sd锼L2I2t(ѫC6èj+&ݺX-*5†5dxX|V4XXI PdƗ9-$^s Bl^ ډ!Z:PUUzy~~{1&OQx`u"4nmX<. UƉ󋓳Y7aԆƎdXGrOzd#Z(6I0x\Z%?GBii m0ˆH]2|k!kԊg|p *>l52LcV1+ y{e (ƥtF*+9r(z+l"ah!a€߷ ׻6Cl[qCϋǠon3':ti2\fɞSs2Y]hUfk)ˋG_ (K2w=XwA{@$QyE*|tLKYƓ0p!/=t*{:(2{$ {-2T$6чdH'XN.xex$}&!. d4H)ÅI)ƴػӉv$4=: EBf \W;C .+$B0یo3RDo9rf^/^<.zy#d&P&E+9 s%X49ހd",cja2rE vup-Ea7+mPH[ v++ jA@l ėi]Ĩ,7:Q 2!Qe[A:0`&^F@;$ $b1L(d3+X杚GYS8ߚC%R'B6.;I׊ !"E F_]ؐHɛKkZ:'fлu 0{}3/|L+K3wIRjvcZroUM˔?55wI)0KZ.-I_\D3hv_^@vrv\}8?D?(u0/gF~bm>Q&C-~M)+*U1/ڀ+(Tv/g&yq@X5jw6#&78WAS)HKgo^ CXO53gCPEfl\ޞvЅX{#N*;ͺg`dEZ_HrB9:DoxvvuӁFA'(rB^ Z9ݷd F+cP#GMYcdV "x1Ya=/dnN$x r1}8b; $mQ\L@B|X?%Io#9;%]',3jc:%dm^+h毅>J_^j$uڣI93#EWñd|.r~Xk矿ʢ) pLYԅ&DƽK5 <cONSS8--|٫M;;x 򞺰.:&m+EP]W8ZH[1SIH:ֽ%EO|IQ4'UZuM& *?!㔕@ٵSМRсn 9emIш]sּgnh g,ֽaGuk!5! Y"GgsCPbז)WT턁 aPdY%J{D 2K/j6SwVب h:G*Dr 0 “"$:1y),TO8$9fXWq<_ŁU௸(!R {W")[4`cXӽ杚iVSꄆ @4qog^:Ұzm930>rH8+*~NqXtWՋ2$~֭=6uzB}-4ނ_)!yU^J e׉ʼnn9DWaH%hww11J!huAo F^r`BK 1<@"ex-,upU[p.Dٱb9ƊVV-8j wLJ4wt:Zڂ- m|,R1#bG܁WaY| a9 HX,e΄ӈ:~v US6#ɚ5D9FfMpf"hLc.0^r5VwDJ 0Th2c,1i΢Bz$5ºwߦ C\ɴ&0uBhuE5P/J,P0; wm[m FuEF' "ck)1SApG)z yit;<A ϶ի~:^x?<^,ύR^aW4c+y]-FZz-#w[@*]o*>Gw 9U1YzZ甉!XpQiw2]Bh4{15#NIu/Lha tl5\m8Sr$nbMXrcvbgFWs#hfR{ؑX&K {Z9VTB t0`R͆xơP%~F862}hay$TQPFy *e`Z$9CAL*C*ĝZZkrLcrm?@“["T*H4H1{e(6BfS.D>jeiԔ1bjm}Ä{8zJ\<# 0a=*UyL}ƤayK2D0Zy-fӲSt0Rq 2Ldʔq,%!Ws]ۛ+SΙМTf wh ;㙅vϣXB.nv6rE:}X=f]_(!gi!ppVOw n/Y Vt/@,FeHv*'YL[C!H70W0nve/|w詘5 ټn|Sq>hlEdMvrW4֋*Tߌ2`;zlp'&O/W50yNz1ނQQjh'H;)i-kԮWE>63[/Co&So0tS'r Ofv^ n)ֈ)n=C֚G'F(|S"H؉N+܇u:G1|)<O`_^o',zp] WzBPX(ҝUh֙zAKBd_ 4WZٵ ja؀5- }Zd=RaeJ ZBs)h'a{pIs^LZH-zW{ʹrMx wU!$+\Үn[fފ9JC< έX'b%3.fSnj}St8fkWڑ6mc%yЇr? 
#0q NS'.}~L?+Y>՟[iIVJ\e4Ofc9Dpyǹ?.Tz޷e*3"LrvW.W Ңt"٢o$s7˟_?8J 3?NDkVun{$Yftf 4pg+^@4е=)|7sqR#Qx.: $eGQ?RU1IU!Z+kU!qDWhCQSTU*D "*D@[1%yPd8Vҳ@"Pdm z rp/=2F;P9>*D !U4#uznLhq+4>~K3?R8`ڦ1V Սۜ{]%fr^W# ڠen>pS]Uh=Ya79^!ѷ~UXLno`r)H=^rV3 eojKFE(Ϋl썀0۴}Nr/xmiaV3D+6?;_mҚfK~v*ޟ^iڅWݥ~tb)^ EOEm!&fRT6C 3_ ܷ+}!1f?W3\Wg;ǹCfUow;~ z6\@zR/޴KD;Ѣ}KM3gSߖ}MjpEExCVp\DmDbMzWF fAL1~mpg)h)"QT:1 &ؗ;m2mU'2;ǨDmų3N~Btƙi`!_Q*uvi 4!*ۙWgK!f-`^ :Y^H4bGl k9mf}1XmkDS4aՒjB`X+qڙ;6>IVݪ9QԊ"y%l HDtemUr]&Hf78Ǣd DJjY@0Oۺ{C.^SN`ԇ_lrbO_!c篖}>vX((& j$ޕ!2DPW_2ŗոz/ӚyWMxPDQGu;+LZ Cs"s+mmjuo[Q׏2)4Le54ꇢF!(F.*5qBQ‚95ksQy~m~m`SG)Bݓm+HԲnbnD|ᶮ^Θj%^> D2'ARp!NjCGAxK\E"S!m]RH1j(3lޕud_!ARC=k;AtI^F6aTο]$EQYgBې)򰆵v] V8\Db,[1HegPY<A4yFtqZ$[j!D-b\hl6,,9r)kl&*FHЍ*H9 ;H[+%Q$PyRɋ*;?>BcݾK#]J@>MY_۴pyx~t-&T9o~z}_M c)y7-w/oÛt=Dř#}&R~p,ݻBqN1{f:~ydGq3"4 ׋SRI~oD׻JTJ*fqq] eRU#^. ej8$2C$R vY'!*I:.(QH>AI(85D( GjJaBA1 ş T 9?XT&`y&ߨ_lBH?JtJк~ @+!BDljm?uU &\l "P=zM('~D\HA 5/NhN0`^ŪK44<]¯utg>/{P[5_>rJph0 Iި-gv8Z"1ˆNY =!KԤ8l$Q@֧h<}[\1`)G EXNƈ|(_qM Ai̢J^ƥ,|wMy8F#`\@iLdEiq%.ٌ*qk'Nv "28 bLXN"7G ΉV1`)D/gnuD '9yJ`9Ğ [dT  GAHd5F[^)=/ M! p,9Ü (c$`@K5MyZO\kF<m s*'xx%'Ji e+C4LQĵ gM5˼!HDbDK 4pJa8F8J-@WQJV81Q/@_ZSƷ@ʏf毠*sY8X/u.ptiCi_¸w; JRh'~;OB"[[ooGOªè,,>/*fSz/H%;+`<*ǨĢpyTXmVbwB_ژs¾p#ã5yx_pZWa$_ݚ4Ea/z|Irm=~ti|I/j"jg(T TIvrIUvUh 1]]6#fR˦N8~B>ip@gS(|ᐞ HgzZ4(ٖ̟ݜbP(n ՗ 5S] o0 1 =@@s\`[Y>Lmm 8x}w>}Og{ϦeX<.2_-Tozy! ᇲ *Ю鵙~e}( ]Q*˰jU @p4C! 4Lcǧ.k45Of iy8sU[5t’qpyL5vU{ł; D>Yq&߯8s[ߵ; 1õzӯY5x; @:pi'wf]N-,P)?>M¬[*2ޘ:O^Mdx;K@ySpSRmCv8krh^tV5|.>' .21էfn4UZٝ%\ʓһ 3̅Xi% 9v80./=܇d [0UV'?1 yx#/~ f2%AO v7n&z qDZrsq4Z/wg[z(凡\yi^7s[{P=}mWM /mT;ĭo17_^Mv8[avEthUPf̖ˆ[q+n'TujE}|x]P;wǓZY\{=:\v rԈv]{ RXxq uBBsT{ q;=Zs'q0G邖X=@-ӲVDuXm=78^[ܾ\ k h?Ehvno[#Nxnnk/G[#І6TLNd[ttnn.FWT˨"l51utsC3vѹd?㒣 ר>2?ӊoz_yE~!=^8ܸ0qa2__lf܇٘&^)1II%d%ƽK1/[ߘ4@aR ېrsƑd9͕bh繹䇫9$M)n9I^h u$ ResSdDa0#)ȓ<'-0rX(y}$9aaNLSt='Ɇ` ^Fu\N]SiIR~Jr]OҠ:&IX)J6% ;I6X30Gf+7F6g;IN%90#aZFΓdIVZ'IPǦ9he=)>V4VXլQ/u ~Bm/ç @۬T G[}x)5VΚ퇡ĵCi4m`|0C[+TDckDl@Z+2dZ',U4/V(ng6B5V0Qei:[h|@ZM9Y+8BTfJzZ+"e5zBHu& jc`5]0/ \"W+Wy[m`:XmG{%! /-5RpVuML$&2'˓QV|,,.kq>X~-|o7 :?$=ݛFP xfn\r!aI8j-޽ zL.ǹ_| G77r烏"E:I96p |w7.|K^t6֛&T t9k[8PMꘒ8{BE 7ݠSVJ:GGhd*wP-fBrZ9 5pΓT*e,ySVYT83W gS^U!9>k q~Rݕ"@vS$k~ N QU;gSxh A,HLeՎD$@f}؋a@\/?%y ͅV{sw=[7S-SN|3i@Ga%t9kJP˦>ts :~|+53Cxz\~|y/C+:@5R 7!ǁr” 4Hd0@%&"ze8{r{E2 `K$<)a#x|!| |2i/r/ߕCo_o<t sx;?s#ZLt>qpLV^<( ZI籥@4Xi4.@#7"RG#Ac+1Ql˷zxVK;7d:o;Ej2a&|ۛ~o zU?PHs&UO &I^{3D:d5-c4(qnkRV]ʛGzn}̘mhg~/2JDwqEDR ;1"!VG⨠DQQ# cӂD9ߘ;{&5i)il$s4&Sb`4-!LPE0G=<~͙Ũ| @(-d3{ G0fY+v(qu}_L_ P[(fZkUDf`xH R1F8N`959  CE^4gQsFڂ<UN`-JN`I)S^h" iƙkW C#{L9QE+ L{ &pnqZ, M1ѾfJϼfۀ9uc&~lS٨I0ksR hzAʼnKA>ʒ)N}9Ɗj3(#? 2\#b-L~/_Wt|7qonsWw~iRI摈ň 3+pJa8Flr`rҋ$>z?g'n|w{,ޅEl |YǗղ:-O.a: B=NH")ޥV-{dzئ1_b3~A>wv` >b2?{WƑJ_FuPlx%cKaYm0aC7oV$nvlmI4*ʬ̪L=} Ӆ5#!O\DkTEЃzUШ ֭.ʈN1Xb- nw%KZ&$䉋h-Qk%& A|a-nse}$䉋h-"[U֭.ʈN1Xb6}[֭ y"ZG}dݘhEdZeDU1ݖD&ukBB֑)^mklۺ)`u{ U̺&ukBB֒)ZU߲n + VDUJ(F a{nu -m'.dWl[7.ZAѩ;FU:ܾ[mBZ&$䉋hLXmݴju˃2Sw*!+/aagn -kݚ'.d w{ִ*?JX+?g-(*F5Ap*Y ,dWQU5 WmϚcUuGjG[RTiR$u 讪jhB}ox u.ި&H, ]˖6 &aeֺZ>̚Ju.֨&H,/Vt.֤&hBeִˬuF5A|t5B]fˬ5 Art5$'tY߷3kƻZYkT`GYˬ=̚ˬiR.e֚/iwfˬ5 L2aZ$qz5 &I*O.SC僙LV˵D,6Ћ&.Ǿ?k4=]4>_kH]Ui)]~2z>!&Ds+1Gg &Z(7CG^*C.+$ V=eab6 $Bp4!RRu.8>5 2nLM z,=~!Y+f"JZyQFL qbR. 
0we>~F`Y"9:5ն!hTZ*kPJW(2"9A2jδEA&a#kH]@y"kЩhYH Qa vDl"J Zt-g>BV'AZ*(M2-Yԁ rʉZHXcv~u>40rg>|#ڣet|: ƣ(!FC5/e>5pa4+`^D3syQ*"Hb#'{۫&R{·C_z)Lgבc/|4w-% Yf/.sDI*7yėM%0EPkԶmR[s}*TKdc_ٮ17ྐྵa]&n<7igdڃA{}+R ga~2ΦfҞ93lX_b17>FK6mO ӨL|bgmKqa S's3 nf 4RlQffuM9ǟU3uuQX[TFU?vJ!X) hW 19wh+5rx5y0okM~t|uKKۮهi.NgG?s^+OB<*3==&5I nx\?|?NC%6`f m8 U5N|9o#\X x\Cpݏ9!ļtbuvqf1>ajOu94ūaMJT Sl:gout S3PlnA\c$c\: aR5+)u>z;IGOu5ݪϥ x醅j˰켌|g3'LY1Hn,c1P$aE6g~)[,.tB^ Fi¦ N᫾?g4x>ayuqZҍ%2ߛlsm (zEQYsN.OaXڼ<Yalim,~)J|61cz ;٬%Q#$ӗ(2?!ͩ-DÝ&2~SJI5n/Û{I#L}*ԏN -$C 찕Ee愣Q@I#ir!Z^zV[I9bJ9Ys;q*YnV;e}&(!_HF*R  @9ay0 0 DDtYmΚQ5%y 7p:ufzj˸OobLJn8ƌi6 7G#ܰe}/#Yī.&>q1 If=F1|C$yǴ!L kZU^>A0"-Nc/p4T4\)jҏ&Uu(GkSO!=Astʒp01,*AH{HG33QjZ'q "cfC E 45$M.b"NQ b%hgB [q ¬ BGۊtQlݸ4ZRGKx"8+Oւ+ H'{4 [jb]%:Pz7f>RZ,᠀ɣiS0ƉxPk9{D{W-൐VR׋6kK{gd^,OV0֟]۳ hztD<:OG΋G/pg4Վ l1R)R"jPXfH#ðeNԼqywOkƪ}݁^dr˒1IJI'|%,3] :ߐQRE?vФΣ5Nѭ-Wҕp][˕,[هayK՚rr`K)9ό xIm)*[G$yuM:\+K1goCpm³u0{NKT>"z=ثl͉-?;jʓQw\,V~?d8}#wor+lOɦ1L&V[a=yÕFe]]8urۗ+#K`RV 8uH&o~3T>#0͜93VUR AR0-tk*df>bX`:L$s?9yf((`*)ɜSkJXǺu52jn=}bߙ*nvTGR MWӍ!۹ջz$gppΆx)Rޛ`ABJ$CG"KgC F<4$i -|(aHVaG9`ic#B`K@$ Qw $\<aMa+y)?ei) m?,l1-Na e#!r1Q@F2W#a<%?)Ax *ZNj}'reT^{cAi#S9ߟ2TB2[Sˏ9r2Pr qT!B piip4JU@3!o<3ruZ>snyBrhLrZnUÃr>1%-{t5*Jf*r*HSY6iF?>F{MIFs8FH"otde:N\ Rg>n97᳭ BOVw5/^109Inf3^6ϼn6 _gf/*v~.W}~^'$@O^}.LK0$6x(5 罃6ODl&PM MP8~pkj aɷ")g;bNiAyӽ.oԠCco!W} }S}p6WwO?߂dz3}7!28=OwuBj½8s&\=bŪ*UXr*}{L`&,Ҟn:q';׎2=`@uep?NJ3E0 :?~EHjSp>f\P^m9oK-e !ISoǩJ"JRDK['VxiU/JW4Gk]!w%&sTktCnVr7>xlqtN"wmMnLd_jIqsũ- .ZZ\rCru4Ubbk4Fw`jUq2a^jȀnCw4Rq)F9aﮣ5_.mEbCXOgn w?^6wRsPoQ-E^\~c(&'YKizjfnz0d#W?!Ӌ^lʛdo`H_gW)Vǣ^642+W}$br9їH#A`&.~{?ASqXuE.J֜+8挈<6ִI0DNLMM`Ol;:yNs4-WOFK`:`eCp~º]1?wbs"ʙT2W:М[l̷c&g+uIV23tu7\1 ^h_QTz8S}8Z #^5B2T-$Ёvl | E HZTJAV"ez{`e[BEU3~kuX 82,+*isk *I ԡ NHl8?/LR0`PLnL^Harb}`(:M+Ĥ%/a#ItqgIT6!;I|nYK1ijX FNL+F(Imy7#n9!jx&IŽG+;k+-?[:$!Hw|~f!,:raMDZxGy*xA$i)&Q:>q>ᬲ{U%f]vyr7z[רꎃGQmj$nϤ)Ib;C|}>8qُO H~L3Y#=6`%\3sW`0o7m;։ۡu*5HǣfLqI 6'H/v ,~,ItK&VszQ$VmWh3ۘ>j99o-xkqE?똝k,-g%ryR*n.Vf~g.m Z\+qmڌU"G~ + 0֯~i@^~T J"qH]\/˄ctYj6ĭZ? ?f1]M ^M//.ø[hK~ן6gռG+C]Q[fi/᥼Jג[ûyy8;{èX?+jDWnyӠݦci~á8Dwdiu谧k Оy$;kR7\i$}8>.X"R)eNo޶X6w7z}J,+)11糿wR0⼙;X$UWqr^ėFFY /rutƯ3~]u `#ŭDŽ(g3 hJL,"AT8w\?.[Jœ[I"}eLՓ1.=1m 5+ߙn|奟lZ|c_\aÂHR0%gӜX8>NJ08|u^r`T[j*/CyIu/xc^sܚ<~OCjINؿ?4̓DҖw<94IiłHQ=>6)QڤM7Jg~O0)h,UMx t$g`/>d57"]՜ qgg+JwΞU#B*F%|L{=wE]lÃf s6,}SIuG}&DHR=,gBk:_;ml()'xj1NY9"VߙDh!OLJT}Tϩ#!iIcZ똖:%%FBJAhPHK甉#R1D d$@Ɓ S։IlVm_.?IGha`"T%33XOOLtTة2:_] NOʤ+B;"u`pYBk(<LqE(YhlVӳk%[^r\ֱLRDz-u,zKd) o&88|[U$Sr vmdyuһ̰5HOwkB'vU Rt{9Mʄ ,* P QZK9F3Bi`b J)9 !NWLI=֜iJ @9).%y%w<L-#aT.: '-dA]kG X0+$$b̃i yFŧwcZWs=-5+49-w֜oq^!tYSMmaI dP|3G}`pѺcMq7޸r&>7TQ⌬.je忯z Łڋ3^kM:-)1TM=#͉Ŕ *l] l0p*r`"-8+ކ~ɕPߚ V8%T)`)L/%o|0N # M=![*e>ԑmS̩ $ RO'>8[U[pk -Ryef IpĎw-_;kv,UuF XESYk7h5u4F*M,5fV-FkaI* `"0͐L"}J1P7'R$'%49um &;W ӣ veToPc.0^kك7Ǜ&#Ǹ4&m2KSxºTk -OE)|;MzhBlA Seg0)`t˜4AF%XC0CRMdTui8Uښ6%"$"x|c\JBͅэ7͗q9/ V];j"uwy\@Ǐ 0xa-~||>h'\tE~7Vw%)=)R;캰^d/Hse(Pa9e,L! +? H#hB$IEnǶn}`׫(ZiE+,E >]‚W8++$n3 R`.Ċ# B+umJA@5>N04ЩL1Ğ ;jϓv%i&U$JurSh'0bx ">wߎG ST620VY{rY8E,%-'>gW,w3]^\`"/bnqһ V=y;{XXd4 b!/U yp^޼TBFrq[![ 5e?j4hSCΐD6ĸB: $vիmyٷ~z?&AM.ZgmCP! 
MWI6s믫с04l6Qb T/ ;cj.րAB!c%$*/CQQx GA;A_=aYZ(?gafpnLbk9{ndM'AiA=e>XOFK<<.%AkM9+ָ\>`=z-iuG<^ ş^j\3UIHnU/+!(PD5w$U2PE`p1YXp^B1-R _gZG9+1\P乕FS]e(GO a(R!p`[t=i&@\gy0:qϙ,^ӇǝVhr6P c,ҎF,"W2jϪ&|oaJ5ܮT8x8iKF^.>|x.6u KV.Pk3>7Ī ]ǠbGI'uUџGJa)~_wY+ٮ ]-^x9_N{;4o|CI8zr";wt*IoMA3dpJ ؂%z5G{,4ޒ顭ޕB!WӨ> !0~'vQ{E=4iQ:z6ޔI/s6oÓ-)۰)IXĈ=1742.7 w,Z]9!KMmR'1zssGi{7$ygb׉UشԮΝɝ fR$8:M\I8凛+wPu5AO 1" D(X`xk)LEP#Cdה+ӟ5F> HkݿSUuY`&iG, 2L:t۳={\CwֳdPr҉5BhK?FW޾8iOԸʭ5dsFH1O8,Rɞ/jךM:sXM3?Zqdr՛yM|vxe~xQoqKKki~x9X~]Oˏtdsc=vq%kri~XCh8#AKOx*TTHBEtMS<GJm`ڙr0=}ہ&EƏAbʏcmB/_KC95,+OQQ兟ًGb!<ҷ0 H|][s#+,8lsp9varUjN2ɖ hG9BR;.$%4FI=/M;`hNyp?[K=YR̖u&y3̤M`̲n bR}J2M2"#x񊢸Ǚ5d~)YZ?{l( `}/\=:ZّVFTOWzR2IH}::. !SD@vvx(b aЍ~7 r;`ژ`dWׯWG.]Iny.>0]gyXx;AZu`" ]潧g,$$T~JP6XQ㨕 m6 b&7?D]NB./㛾r)fJ3WU u3'2c)M")1*&T{)NyAN(L2[-Bs24 20}BQV']!\6/my9?.zTn8Qӌw3*",)rTH;@gR jS6i~v%w0e<#9bJ'-v/ KȀ\W)9ՔqD9i5GU"{SKpY;r(+F"ZQ:eR"!riB,΀2) q-(RH.SOT cf*e^5ȳeLID+E4 3qtTĘ 2qTR!khH@QHmPȞ'l4݆`YfoBBdexIq$Ƙ'~=of6y"OHQ6LFYv#`/=ha PJBV]ri+H[&M U[l5s9ݵ T%]4{xਇG4 ydy<:JOw{Wa2ZBxQٴ {Wֶ&\ʧA;\NcG[g NI$ gw^xxywS2tZj֬)w+d _ _E0qkiRHpV:72⡀!Tj񛋔f@Ʉ6~aB8շ/׫pkք;K'ai$U\(?8ID__x e>o᣿>;OWO$-:{g}' K} O d-`1pb*T)䇻[?T`p:%X6z̭lS](Tn9Ό1_#󴖦_$bco/non~i~f) 7rc#TMSصD.֑?m~ }-W_^'yW@n0_ ~{X?g`Ld+I!#x%Ck,Dt +nNmI,W[GחYBJ,ዟ}S \U@iDj1&tE-p(&G0Ӈ9Df/ (PU{ȟ6M=ʿ1p{Mt~A$x}kCT䔖9^9en.b]:uh NӐ.FtsΓfU 2@dw20Gsq2#rMܔxg,މ9 ))6Q ON .M3,'f(qޖf#Op.CB~٠G\[Ԯz/D6RoSشm-ey7HE)[b{V%ܑ*Gڹ8HwJkyC;LXgJ`޲֕`caI}F^ANNb\ W ruY%% j:t}Wb%ߥoOKBJ I|*Z (/Q/%8:SڝC)[9+4?R|@g` fm)FjÐ_h2XPC%IkXCa]yqu%$QNE02~ ddb˧{T|#xCZ%XEZ:@uq @ C]3RT9E7*YU 91$EysR6|y\ȞEKqy\:QJϲpuYxDHw=|4ns4b2e¨Rp~|,Me{*fRʡJ5-p J`~CIDtY˼E`:뮃.",s`h@bfaK 7%&;'Ȍlo C4&z5Do}D^x5 Lvwm,1nPn824EtnbG EKܓXt'6tq.֯n %Gӝv4 ޵x` hJ5Ki2'IK2o/]mѪ)@`4i vNT+ԋ߶+Zu{RopZ#%Nq*,F잎;zt*4k4vA5c߆%z0]mh<ڇӡ"p?[r$L){o/}ыwyyۼjV$yJj1;L5F9Өnܡti!#y˩fp|IқY~gSyx܂mma麢vM{ںeOv Im? 4=: M߱Ӿ ;ŸwN{da19 w;riӯ [^||'*r}% y&8*\?וdϻqY;wk*i:mrhoޭyCHֆqmdS#=&л5A4}G6.BBfkּ{wkB޸Vl*cS!%F̤R)GV.e!N \f5֙0N򁲅ʔ(ͼe{fDD+ oF5~)d:XϡV;d*)EV;2띫V;pBp[+A@7%( V; Bp|oc[:7 V=4UOO'R34ʈDTYR2T:7ޟYᷲk5Dy:bCxz7j0LY9:w‹=*Hͱ:b=gw=0vJ\H)Uo7$#ࡀ!8>p|b$ɏ˗riʮ*aF BǝVj.iϏh|gUM'9&JN_ųw:񳍿hT~̴BvR'X};xfOTT[Ty=!uh7afEL@0> ˌi529pżj,޵t% ' x9)YR\,fe?Rdڢk`G"eWh`ISbSJQ s4Re/gaLp7GRQZ8򎗢~'Ug:TgG- Ů~Vۣ̏Ȝ 6:`Mktg\"8$CIN$ϐ"Li' 9򎭮J"eǵd9>p:$Fw2k̸ dczN9(L6R(`ڡATv$1K1%"3۰=88K )E1VЀDOP>&˧A㷿~oN*2pgyR4E&otgz/ڒ 2SdZ69E9_n?zz .(i.h{.x݆~ L#ЉrY^f.|ӿ]lY1wϳUp=p>j SS`O~Aow0>/?]w}[Xؤk^m2+b*W6r=!L/YYHEV,NlWV 5`1pXYUءzA%` gZ&tcƛ@̽R&7=04,HGg*i[|Jg4dAueR9䘑hoOWHgls4?'k~߹twws=ЏWW~Y5\>CZ+q*~c*hT=FI᱁}-͍҄3{01RBN*p<H\3$Y,[YK6?4|@ I#gk5ZZ[X<ٲ1C4:y 0)jv¦@!rR*&6MPHG~{yg>@ *HFE&]d@`%WrP's0EpS'\t@|cm|Dw T-Wi#FC0UH%Jo5YE|Zte ܀KPXNFkX'@,LJF)K.،GܮibI-KʠCZp,%t9I 3P`xm<u<#*& D(01.y5"H.#m9.UyBK OF{B3 ^!4GAeʉlS`6 LJdف}Iz5U>&>.8N99$: cbb%'e%īܛ`\W&r#Ȫڀ.yo$TRc鯜3A|YUfK˲v$wG_v."DuO`_}Jrz~`;Y,is˵LF?ʸ ^LpSgXitTh6ק_,RigUL nY)}^ںj/ɛ[=Pn\{,moC>mi7wS ȧq4| aXem ޴Ƹ_0>wTMѳWux)30!ف${`A l|aqʃHdn(nh$q}hm4ږfYJeHY7څT8K%Z;~ŒA5YH?] 
kgR2Xb`-H0'Oha9jhlCT*|{$\ mu;k'~0"HY>eg4: eAĹ FV]0i{HZ(Ԧ/Qt2L-45-p#lmr5 W} (^!lICq dR`)B8clr!ph {hw>±=W1KvCX@yR @@٢x8np2k&8i%_9o4ڊFZ7ǹ|v q˗pL%f(/@ᘺ eɶ£££6{0sLY"RtT&%iȕRX@%uuwڐ/}Nz\5g_:R9,.m]JU k; ][m+QpeZ;Kv~o2иnhsrk\D?׿UZ6lj|9ʥ?Zqir;1G/5Hk6A '.{_cO˫&8Enc#\z2KBu?- R7a( 2% V%1װ/]OT1o ]iinǕbS?s\ 7- 뚽_uM'?F sRN qȟE)aGNi +ψȈ(qrBZVfFKz^aJCraCY8\y"xZYD_s<5eOOdPgr_u@EҦ@*s4@h@Qw'uTmFz/'"M0[E @#m &,7xY7/2ϒQXeQȥ?D)&`:qnOTNzk&&{h Fe1a}#i$*xG2VpO~o0ڞ$둨%{qz8.~Y!#]S5 8.z Fe6A+1#xK0sBT *("BxA#,!x݈D NW Q!}9~1YFz$ Eb VQ)IEbjZrىt 1f+ \n13)Ai21))+ _--%`-Zl0e=T^K^dGix(jSY 1VeVxd+j>0I{G8gj"Tm>@9@4Q Q(,Qb,}VZ?J1Pcخ@K_%ncu5ouZ=kQM%^"cx=` ^w7{=<ǫth뭸f` < >3 6 6D[YA^cXÙ@wi-a9ƕ{J:NZz}VTJ;d#kR</Unں,up,Y*18DФ QYDtRpD*#AG10WkɷrR3}rbdFdErpSTWءF+k3d!1MV٤%11 Qg/+0H}Ыwg:*.@Cysц[fp"0&Am)X[ Nu^I/u5Lk)PDf8`CJP.E]O9Mp|(ML31Ii\,<Ɛ8LUF7eif'5?}@@{?Hj8H;\5iE`=ڑg1_Nk2 -*e,MQJQ grF7lՐn'YE s\rUֹ! G2fO+պ@q~Bo+mDmi^Ph&jp1@w:Pl7ͿN4/IA?{WF/{ :|+ _n67X`pw&m9,Kbzڲeu7Ev-, UO`.5*]~6G@4൳Rȵ(JXCQҢ4Vbr,r`rpqfbx`V-8!ZhF=K䴬P"B`])j *m|7C#JIIz}b^ FSVX9pfjɄB)VkmRyUuܟ5sOR =6[;b7T(UFm,y3die$JI'kK)XaJG+mL() s0!t9 ę 1xtj`ݫ W(RP`  #$ j] YJi[m`BۿΜWNKX3p$''ʹ p (:}2)P 9`3niaxH>d3fI<\M޴s/۸C8Åܮ-J~ax] |~@c{yI/ :?SgRgh=<ȓ.FO8*RĮmܹo8ehXIGnZܵ Ӆ%c'VAJ?j_ncMIEl G]R[`wD^M*vlܶwI.*]σbU3m]%jظ6EAhz ˟.,3S-]x<}*F]x K *cܑ4c}/1 p}02ռ#ځ$B_MĺUkngFtbbݎ}y*ͺŗn\Dd;Z=my- ZNTJnݢKZH.[2[ʔ>?e}s*OWUdp=\,RGQҨI1m1x5,?R WkoZV"2~hHV3EKĞ_rxi_o)`H!C?3 OX(ʣdߨ ` kfdJ Oc"(G`!Hh*16 2tu8/V֟rwz#`$m'X:c1G8#|#dz㪭JW `5T wu]XS)jBY[T< (֜q;tk_zZ /oO uU]|y1Ÿ_VM͐iqe1-ޢ?;ba&f3G>ͩ0m&a/"M}L2}EöHt1Nr\{puE'wy7F!ٔqi! 977cAYdWW7VǗ騥[*I&2 sn?zSg(UFUv$mb> {?.wWؾL8kJߌZGH@1!?m>eeASRM#\0`uF sQx% 8Ʌ"AjOqAjMӪ^r6J+L ?]O߯_ .>8y1RP "^\MRD<^"GˋPSHE >p+!BRxFEaEO|mh׆OA۫,J c%4kp9SQ#4ٱh\-vY`}D_ Yƙ@F4H\m 7mdk n= !J}8o` I5ᬢD4(]aeUk˻o8s)]Ƌ'6Ƶ`83fB5Mo\ `!LŃ'[ׅ#N^?QKAxk* wf54Xcz߼ZnG\ȵgnLIk,?1{eIj!e)@9ٱs{laywc5q]OW7SyzGW8 Rrko$&mbvK {]I~~r89˺KNhţLFn,$jaD kSimJ6j@s7?b➐пX BF`״ O[.ދ$Tp&8Ix0!ޟ3i0'}&wNlmIUW_f"V% aɮGigWzʮ$ɿן6/5/ڬN^dR/%y;д)GCh\mmsa˩+cҬ$5)^t2ϫi5.>qUω+*ws/73`5m6 vG\ilb@琐6o~ۇ|GOF q5çW b16jl6 2HkR^?2S3˂e<Ӂ쑽ĚdBIpQDRM zExJP?lo^mjsbř9Xӟ^3EW j"')zB@#SҺ,Z.sw>ANց8T[Յ64%BYqj쇓˒e2z˷xJ,n/FcQ1cDra<9)I8˘.)Jr'+%grXnm(W(qBwCJNjN('%x름s#;X l׃fk `x 4;ܘPh(X8bZYi%%;X3Qoޗ-ޜw {w egU, %9L  yUk<0G b "30s&q\R4c@DFt ;|ZYLQ5/^E`w# 'FЩ!mMqJg8xo&}Xo4C =Ĩhܙ¼σGԩ"A;e DGS08T2)NyN\3|Nav09N4͕x۶mD(BZLUk[D,#v|y)w?yRKM mZfŖbSHLRr\B~N~IЉsġ7)z 7u +u PSx: iC|4 n;XXI@xG|W%THjxMk6Xc~Hس&;C٬4t1BIuWݪSN2qNRsvvQPGE任0N?i+8IM`r{ 9I=;aJ\qܞDoXH=_cpJG^o 5y GC8= k,r5f Pu56\Yֲa6ƟĐZ\u|@s?4tk_OK3 3Rҭl/{Q,lHyP]Z]-W Bs&7結 ANQ9I0μVxRr6.ϴkx& @H2*c2 lk'JZiJd-*+q/LAvyۦ5j{7'(ٻCB~%:)w;7 AQQOٻZc^f,b{89uIh1m4y;?AEF]"cE $/>EbX { #rIQEE]|81A%lڔu`1fhbT6Lc#@Js3vU4B>TrvE HxȨ>Z4]G4Riܼ< Ւ0uF3*W[)w;wЅ\d_d_^rI}c]SجMkD4wxC]H&\<݉D o܉ BNy~뗟i3pkPbص Z;^ĒJP]ޓq,W}vx^b)KORDExחow6Po(;fhZ< өJJn+$^Hao 7Lq a9ĝ_P 1(y c*L2Cs2IFI7ɝVb tzֳt/iѪWj_Jds hqYF\{[ʅ* 4t;W{_ksR9i+]aaYW M)^U*'W^s'RUYlaD+qCdC`+0& IQVi#x[uiﱺ^8d,1XS\?B#^dAkd:pi|H6|N['9]ufphA!f Pei8L%7r_y򅍥fhfR :'![ jw ͕5aLPOX?L-'7ItdIEZˑfNR5-0?v" سF,$$r aT-{VтCz_*t-=y|+GpS^L˳*P(Et-]_O(0)<T7ypXtsʄ􁷜nZZR 9b٠;m7p z V5'J[<P:*,=Et]uJAwrMR*-E},ƃakbp2cUiMi=u4!^Q靗`%E-␕6 :B[?m$u x[& `YZX2譺dT ?dkw!y+6h<fef"xR!({8"e Xc"P0+!TO2LAa(Fq@#{ Zp <1&*fuw+x}T9q0)^r+ ψb#vEēҡpiKbA}1zGɘu8 jit{=H2uΪ,* _ Zu bx!YHx! 
H }9N.o 1C;d#H~9d1VJ&BZHLK:j aN䳌 |ûgfxY.;4VkVsɨЫ񚈚A<夆 +lLJAl@6Ca%Tlfܩ f+6Lvfhڽ ᒉsg W^m)B=0߃\BZSDƬEujk\Rvwz|Qe^멒%~)鱖DŹo%N2,edge0IDYwiҟ&j~-[D΢u8~t8*kpI ~FoQ C%J!IPdM8q@rpWv+3&Oa7@iĭC.uSjecFe(H,6@LO)5+ Q8S&WQXBryB وq+LaO}<c4Q/"廼{dl\^rڸ_nVi ^ %oowM<ݙ( ߜ?4 Qθxp7mwoL)L?wtS3{ys}3XÌ_!1_:0J$2.v(g?Ƃ׿eDZYJX=ϛo|$!Kj<no}qƜ;ɝk0{u^tBP1,˻\= $}{>O0܃*n{_&8Ko1褸^\}2}\/nlA~up,;置jL>u(~aҧquHNNHZ-/X\fLION+xxyI[tE|4yiEƘI,0$o9Evœ mkZ Z#6*xRGK_Px}r7 Φl >''p5UFA+QlLEpw?6Ʋ~{z 1*Ts,HM[5>AHbi:!rhDT!kOc0fԞXGzBپn.Żt,o-xcL?o3w(p5}_S~}Iql W~=gJHttNz}IGVdqOk &u sI T(ݖ|HYt_/DzoҦ?rbb>~iIJ̴.GW-&쭙 wͤz}&n꡶4R;[, gz}%ivL(L;r_ݷijK,GhP':tzĞ!@ V 48Rr)'춮uk{]^ݺnGk(V.ң mm>`E+24T>*?`/Z(`/{5[I|p]ZbSؔĦ/tX"T|m$i|d^5rREԴDWпAJ+һyr%U $}5ꔍ-=]hl gԾMтm0zΪ0#릳8ecRFwF/jvF;9}CsC*_uZt-|ٖӺehjчt,J KVX`#rdzk,\Fv:#DO>(Ȓe.0Ⱥ[oF{;瞳{Ѻ!PFDU4E%H"cc 1Es RJg@|+-'20N\)@)iʊ٢$= ظwM)hJGS?nG2 L,b\l9 ;g!l'#b&LCg@ڤcW RЖv"D\+EWZ΄r)I0}yX†,yڲLt﹭ oJ]xS›Rt{^<-wAm%&̛ ۏD$!-"X\r G'ُ L"F x lĤuZ2z}&%A& yh,4 @N_M(c[3rfwFzpvƒ܄uSaW$혴R4o3,PZ ˗d1iIǸEl;v=RB)CAT{OT&"Fg}\h=,stgԞ ޸$1F 4 Ol5}r=wCQƴұvC\v@45)tB W* H v6{*x QZO(\CiڶS`B쯖BXo{6-倐=o@8'~jtoˢ{5K;7tAW#oYdLMFs![΁]óO z{³ >Y8O}oyjxvݩĆ1)b1 bd+M PF .r S[7-mFtIŔ$ p]7XةٷEb ʶ>k~D.wZ,t]?A8c;Rs me.h|_. @??8OK;kAH}NI o|\E}n?N\zXwc2q6/v Ř1A *#dqAԢ3줂 N:}JclD-_R71g!FR ORܖ*qAr[wR!/_~͂ɧc#O7վpGӵWa,id<"ESH\J]kA0t la".b1ɐmfQa4O :?'EQ ̜jJSэbN0_YA:TE[UqVE[TE[Tef75Z<ߜt\fL7ܠ$~̴9qv+7 57{ธ-Q@JMlzlF5SauFFjAguȌګ$=YJ{`d9˕!k腌&L~ur}oPlUV9[4ExM>O4{rQYڑRHT9CjA5;1Z__ NxFqgvVFj|fDLKZMJJ)!$!QHfPD'2 آ*ZI/h-/qd3ƴ.ӡtݸt}LRfsFlj: K5YJSc"Э=hR<N`O"`{"`GlT,_$bq0Ͻg\yB>@18lc[.2*+L?UQM?UTSvU,3hwL1777'og J̊&"kЕܵI !(f!`7@A]~JXª;]MY HH}W)m4ɒ@)N70$pDq\ a 5L;5 !El&m(qxe.ثFgUuATDuATDuA,1Pf"Kggz!f*@4D\|c0 (dCfσ `;XWFkS~pJ>w-.#"g4sI$cY&\\DyȖ{|gFmFUVɺOSUVZ%k˓df-eg7VĀ5xlCXi9pk8jc68ZgYT\Xpz=?Hl gڱXL; oki1AJ>Z&8f6J с{c^.d=ruتb*ܤb*%.9bhmKOj Y(5*Y"œI7<v% ė/ay@=t)235O9 ~#8=|{:d9:Fn}9$vE>HQqe&YYEH{hI/v2ˎxR+7mv=z o: NEDXѳ&ZdmY2)RLIj_}n`'h9jU8Ğ򈞠QmndxnS7XZie r &He>g3bi01kbrdpYf6TJ*kTnuCmЭЭe:t'Su1zexfOd`2h"&}C"*؃94k IN%?V )ؙ}m3|:,k&[%O9 "xpB !:'YdȋQ+3GU t{FVZ%kUVɺ<:Pf]M<&7g8&#68o1fB-=0xlQh4+ ਫ਼.j$d?w76墕_`M;^mbI&Rxq7 =ow!?]D"/KbA7;ˍ6y(:/?頥prsso:Ƞ%n8nw9 *e kwfZa12|fΉ9tR4 0#Cc 4qb"/MChEo+mȢBa&ɸZnm`Nv20Zt$J(/ sIIEIM(,qTurV6V~3{aūϿrq®Y7ZJ>O#৔agҭ98gBiL[/rxS. WN|pvN x0|FPrv#uüzנC>1ZF)(YAy5(xH!YT^%ݒxvG~;"|^ij5W4)Br-C;mV?KsLJK~a&ǣ_s[XZ2|(:H$!c%~P,">Cyi/ +ʽVi]L?MC{c>wrW56>ԴUY4$n\K _ JaS)mR~-;Λ56HhIMR')_+S4KǕ[٥q].δ^7MMvI\ 4|{U_zYs56̺7u2kHMv(Ɠ،`rKqYNr{4-? 
qx'>W7oHWfk9q&3+sculnrgx|K ^g_B`7&ai9Ŷu*V֥6)WnTY)`|,?wR\۷He qhÎȔvRh[(*PƛD\tAxBMss 1r+昰(Z%L|/M;>_'֒ '|e,LZ|R*-z'4(+m=6̎O:.otik}5ojg[e}3Vj =߾zֱ/mټ|^~yI f敮Gy/^_old'WՂ}-[}929]6?g|llߗ5:Ob$.< 2|kT}7~R\\IάE(ޓrN//|šfZ;9}{˳FrԤ}\<] 5<ؐ/ʅ&{y`FFyP`}H{Y17f8&wun;=S-yREҤHJ"w¨,ZR+H!ۨ by{z, oڐ\9*!֑U921va쮐\as6:g_+uV,04oDQT\ ec:`QLI^1/c\ v8B;7?1QH 2%D׀J&\z&z3?K1](u1 hQ}KI,Q\AnÇXV4bju0`tC,Ƒ2 F-uEDczcՊ,S$s1."G>L"ݵd{s`ZoCo)F>ϲ68fp|Fc Z@`OA5.)eѿ l0:I+D3 dP9E>"* CM W݃*0Tm)HW9<-{u`RQά11.('v# [#IoQQPs#Z'V>V%LkǹޮX]C!_FK#/c+ P[G^RZt"^BhmW /INꑍv!۔nmvՒˈ>˂D:]S ?5ɥfO<}L>o`v!C '4љD.uJEAiFEh8=/$(yxꕈg] X {o8썉@3O(񕄔 J#/TaQ7iXCRfcidBZS@IB T/8;T-EkO-2x'fLH=-9UԫZw8)Tn H۸rTԒֈT<$!dbIm'nkݗrpPιA>dth!8XFVopIC4 NSD`N,?Z-$V]-DK=0; 6*nh)=p?ەBVñhh"G낋S.i[8 \{<@jb|;ak5%KBBWZ:`ljD\LUA.HZ`79oW o>JJxqƶ,0VSj(K\dQki#5iF* ߈idH$j-+%+9f_L:bi E:U.Xs] c\oߓm͉j#gkM,P;ߩ.yԀCJ d竩`2Eae>pѺJ%Xa+;ӏP͜]7Vl^t5ɘB)Q?5b \+w@M*#;mLFf p吼 (:pȫY %2u >j$$ffoLTMVi+ qey)XWj,4vg=mò)5ol'P o;̓"RLI'ΌHV5O:Gn)I#H*΢JHsQ>*C Tm_e64qƭħ]dWT:bwͧ嶻VLY5%`FYm#]ShJf%gυf2 @ W*Y).( o>7'Ir!_ \ \K()hΘ"MTpYUې}@"۩'lV+]]C#wyWRƈ ~ Y;%\9noPkJ3Y[k[IC(')~jo=n9ᔷ+t.n[C'ȭ wuADo _&ӻYU6_ و+H^Dw/RZuےbҳl_U!91B0Tku#8 Z ܈ۛSUpzW;2Ĥ.Z k}[pU$\lA޵4kE^eE]T#qdVKk 3fɚ섷+4*6b+Wg$X_?-cAB7ݨlDs$nË/na1un,qhm;,3DN'cnz@_.CA8F}흵91EuGPxQkB-?#˿BnT]aؙ3vs d>FuuX"N}IQ-)A#6VuQʌ{) j(YXNZ5< o K!dD":MGw5sק _:;LJ)3PN_z'uAHqzʂ@F9!0 [ݭQ[Qv*[<4Dr4[[֭ۍ葠$r6||̜L9]`O?-w,9iC;H'gI*˂`hK+* CgtCT[XGCVFeʠd(b4Hiϭ0ȑ?MCJQ$t@^{[gK*eʀxe5{ftc4n]vu{Ӽ3 5o,I< f_A K'x2?quBKgt˒He-w$SsW gIr tkfMj2#eG7P.uהZA˭#8%OՖoyV1v^N-=; g#5* ;ZfM9=nt]yw3 u mֱs)ZksXd~bVU;4Zsq*n45 ֮kŒg (gumR7tVvn_65 'ҿ,+T;r=g5ߚ7lh'nnar;79u> ;/Z*;ϥD$n# G t҅JFE[WrrN =BKO` Ԥ$rP >4*GgXN<F"T1%4K›XFr;D>`{\ vK^ XaXsUޣƒsÓ'zPK!:/-!c45kRvUxY0<@t8bҨÚ5)G0sq\Q\;uH1^hOۄ#cCA֘ztCwwtyZۧ>@cѝSa Fq*2kF*Q)P ryZ|M AJP-ﭧ=KA9E-۬[dr=X#<ź:Np-c܉b֙횼kaϜkŕhs˕Aӌ-Gc ! cEknR&="{m\-pYԔP*OպfQ4@ZeJڱ(㨤]- PvTn̢KѴpuo-n[hbvkɢ ¢ zp)N|@:`12WB!Ė _:N7rKcǸ(v!0uAh@._BCJQGa/grH;WF`$%dN1V0*aOםvpI@ E]ۂ@\d *LJ_sK4ҡHsAKȜYG@Cx@t%hN ֧mW7-Q V-3ء4uJLvme -iPi6:޿se6qo=}@Tw| ~zACy!\u6s6]b1A!O)3*QfZ&+x2\ >'럳pZ,+ťϭQSs^0iLMm A{}tO=^&.68Gs⧏_RuFe414S>q44]E| "CdRGɥZls3Ac޽g40}HyrLmPuL9_d0! AQ*eX J\i]:*%͍i$FyIW8-$,^ rdkTw J*H^뿾V I >_Lvu)(+5YŤdʉ~ut΅0Ɨf j{?B@ZUuQsgmJnݢHE 0vϐkȓ7v_,WSʮ}M%nĭ#7dmPmZa~65Y<ߗK'LI;Lca4"(rC|BP0O @I*$^ w%/}s H8Z-'WdE8f6*hE>V({7qm)2)0fx$JyDoei)!\W`xk +|5²qvMXa uC@ Whyl>̑qzp&x%/c^7~QkvB4UfJ&cȎ7Od {?k4W:nt'I\u1jο 4m]ZF?5e:Z>R8ķ+|sÚ0w$`Z_G_M﮾z芐AtIP Ojr 9FQk-rJA7 )/lo<ܴJ(=)J3:Y~yiN~5a_ܛÊ-a&W8ȕ/t.:[иJ`( {ő-)NwهR]%l>b=¿A#ʦcY&OT/uc2%k0!$#&$[8{I+_دe\C ^ǧ/t7D)mGR6 c{^@ZKs|o,"8-sŷixwh%E᪱j1Jhs#lqLlNF N/G̖&"0[]hW-ɖ)*Eo\8 |j-(NP;;'ib$Z,~@bt^ui%wcBcX>Le5xivSneH8g>qSnܕnRҭ/NwnOE &zTC>q)ސ8o6TIc-SFZv?Xk:'_^EObu3o.ӝZ|gjm^}Fw%gwwJe< xR7cP V|U-`rz*-࢚R9gn)%``e]jmb4 H2Q*b>1u<^Ĕ'BfNbQPZ:}ڍƴ.uh~\ڦt9` ׌ůu ڴݚ4iQM:M->#Er`R EHBioFoS4NJ!na*TbU5y@%EObb3;CBv?݈j $q Aζ^,2oўˡRҠLU Ye1Nˤ- n/=JsMGsR @ w׮O1ޠ+Lz"jHR}*#!4)H}JHҩ>iEIޓ(QCJS'E{ hlN12)1 ~lTWe(cel[2T eSlD]A& Ǐn/ONu\ -[~-cas/KJ^fnVg&7,dn/i y*$60#}pU3Cz-?^^i*T;wgw{n9JتOw`Qm8'pݍ- ]!d4È2jw=vn4;LC;+hI8xԧ(-1'/\Kh԰K!>4/I)02"ڰ HZLF[dq[QT6yUY;>FBGHP(~ߜiJAePxWIAfejcEHJ4YQJM!3&D#!~WgIt+«9qZ G>ڮ{&RJk)%׏6K2bT럐~@[v*Viە&%x0Y56貦Bᦾg0OO1Z#c?oՖg;3-oדlCjvQ@,8+ʟ<>Ӱ>D1sLD1` <˿YwV2"8Gֿyaޤ5ElT`>^\MB~XƄ2;N7b3o0!A%NXN_G,ےMY,E ǖbU =CX P.s])ę{D)SLȇ QM:׶4jk: NY{xqЅVh0ږ >m<Ĭ%׸ZÆlĉ+~BB+٦u* MD4dR6?ޢĭf %<L!X}V{FG78WG4L@ y j8EYI6!F⋖1K׵<ƋW#\~C`ΰĖY !PxlX$]b =cŲ!ռ= Kx,^e#Fl! 
_ '?ҕmf.aO@% 楿5r lq˧̘?~wMj CwLv$ːI]uFTi lKt\(`}Z?kwZv;dіVs Nmĝ{ukR׎Y mg^M;h;aP\OOmٛtbg5V2pU [ri'h*Tڋ[9ђqS^z<_o~|vVwv1Жxqe2q|߇!ԭ; N-Y}8>c^kRwcF#XGܝdF)zw`>jVXfxKRkJ&VVIB-%L,l`-'}vHגZ 5Cp=ybEk c!BfymǛ%[D/SCI<ߓ#<|!QH̽0f#i+FuA3eBTM]KvܩoQshcQ&yNV-'U;.[cďÎnB!ef5לjJ`T5mTmLSTN¿Mo$4 6V9#$$Q R?$>8>' 4fHA'<h\mmM~M2JPS=Wޝ?ҲܺݡCoOMԲ#r(@$v:+h0'>= 4#1@Z9J$쁜S$g q[䅾h*P f:}<ΓfEhÝOa9p[Wܗ95D\AyR@ b1UX$`'N.;$mm+U }F\S% y]6];C"Đ BEwYSLaLE>5#N1KSVG*I؝⃐ &&/G֪wbKhS(5Nls[NF.m6t2azII:I.ޒ[)r>'s9Hz:BKzvG>NTF%ѬUI㪗cetoNu tt)[0X XQ(4#9w:,6 KMÒoӰlXr6V(B#01DFSP*f+txqhՉ*ŊV|gu0Ȯ@ވBH-gtgkN0V_5$& Vm+!Z2@>vS3ɦ1_/L |oN61,,yZ }i1ɲCd)E#)cQ*KPs} K۰4?Ľt+&1JP-1 IW!l艆?E#BZ1&'6xd#*a9X/'1;#v&pkuA lsAǠcI##Nܩu9."q䂹lU&b1''h˒%tJG'64x$JP)޳R!bCc]! ܒkbCDZ%5 g#AF)Km|<`MSVߓW*DBX!CLffa2n.3j) 8#Z>o҃kp.d2Amg mOY3Aڪ޸SogJʚRFډ ֚]:qHgNv: K}v@1?o =UH)S7KsA@۔O?#H9{d QM0kbuӧkc $Qi%i8 %ggr! &VJlj8:q9 xu5o9fR( mo2q\$ƕ@֛ĜcV3N,-۔Qb?VXyKpM `/ER*YI٤/J l&9I %h"R`\O'FT1k~Ջt8jnN[ azk^ Pdtծ|H>siLQą= P~챘@) XK<3bd\lޥ!c Цb̲F/@a2vَ)HBI$?WՋ7Nve}|:8qk:3dn9CX|6(HaM1"5Z)Z5jUu4v9GJIɹoDl>l!@6mPRe# " B0y><}(b2dZo7{c4{i4^G7DW:rvLj![κF[LykvfMf9#ـY8"AZMgsAb[u7[L@ΰE\ZU28rZm.IZQv&*X|;~!4ŨIwX#MjBl5Mй~'"%ڐΨC^UEH;' FsVp@H Ź~wcښ.}q}M%J02V\qʥi&IfB H%bbCKq6>~x~3Pj2_^ K-o.n.@ 9T##O Hj7 \tG[s7iv!brB1#B&l72zDIJ(z Pa 'Eq%%aK%xe3K! Iy$>  {=|Hr&i"3Vg~ n§LꭿT Vh6ñMb7@ĉ&~zE_1 ߷:ᵠb'j~~@}tnL^8㵚ɝsilZ{|wpϮGz=?0z2?'p\hnW"g:䧁 遼T?qKW~4N%Rhxjcr^͈w>}:tOg$,+? R-,.svLgy7\~E7^_9Gh]:=@BV9W豋ي3[ +t֕>0lOWUUi^]EWx~/ޟC8# k=z3kw4~'.^[ ӛrUFI/8zhJuZ#/u37pUyHg#;q~r.LE%М*ќ|3OxU- &OO_)`nB]NA^Ria7qusZףXgh3PuOefP>vݵf!ڢJ.@e y|uSL^mzO)ՠr4C7oEj\z)s,35*p`B Ja%(8VK"!Pګ˔#ǒË MM=ɟ±I~O7|QJH&&?@ʁsG?|jdu;@9*)WX*{1R'_&rDȇLd'ӭi!yd߄0"R f BS'˦&9l/L/5tO7iG xr7|႗8c fG%{ˑM\(d#N>v\ãeuzFݻmg=~L_*_FC,w`BI R2c({dkc%HZ4AQPq=yڕ1@ӽ(XDϓ+ݫ|V24s !elayuQj.v}dnX^))c}R"BI:OVc$5Ք+ n<ס8Qa%V-Q0vEy\wu`H?fш΁cnfn4)>=O JPB ~<_ (DhvJ0DL!R 2q("VWAΟ.gp6$z+=;!*DmE\X #AX m=)H}*~x:<:v>9M3:)!6=t RW#yfP˴ƾq׏t|,= GFlrBL'`zYaG ^Zў#f095)R`ƴQeiFGw>n5$S!Qb"|1$!5ͨQkUJ!{[ڹ2s}?N޷4"gq97ՎK66.?3 5y2l<|\!?sOQQWpQe$Y !xY[ nsԞۦ~wYK::N|cc弎Pޫl2&QTZf %886f-ޜ5#I{7a]uUȐ'B#Ib@E+I\@)s heA+z'D4 TaNu~eUI q42 $5RMӖXv*FXRy%@͐#wYZq "MhiBHE}B;p rR'wgBJb_ iΠ:фxsJ._o)";{}%`,Z3Pų/o^2r';}!k^toTR ˄eRj*|TJI(H;?g42BaD8؜P./`<^+k7/ƋI &|T~Npp[18h5ouE4⷏y5xy mz'4yn;2K}}}}Qo9}iӖM_unaDN`5EAsWhLa-T#ՐsFakLB=ZQo*K#1`=r #h ٳ-ղV6eTc|Q݋LD=4:Ȥ,uS(婹>^VRrtVR\Cv/2gP:cmU eo}1~ځ8G^Tz:S߮\ogHDIg]gF]!7%ǫ|&~<}e]x42};u{wE7޺)VFkҲz99R4S{ÓbcZ7,&FY݋ mľ5$l޼fBb{ج{ "\; ,l),?'#[O7LCܟiCeC318A7uvbOxi %?ੳ,+j EwDpZLhi^~ Ѣ3n?Tہ"Bf;[}S2}߀xgS=*C|8&}oz+ڱu2A 6> c9( Y si2j+e߽<+G󴁣*b5`(cE5KPU #ʈPxϬE,}е9{d232-d4ß4i87BY>̲К<A;˒e1'׈UƮ$-.,N>&'(TGGE"\? 8 |Ͼ> jf 0ƙ{!jU:ԃB#Pf>'t׽$dԁ# ʲ=2=[}f^#bZu8;anBt<+LkH-QoOf]3D-]t]9fDY{PeH( *$rB4ZK:q+X{ Jg.yM 9śNc,!ZpymnМg.Ye%Wmĉbf;n`zMb70ߢC}5`,AA1R,9j#FC%CsEk=^,VgY &JCk!h6*b(*lU@Z1vO8N9f2 |D  #Fv\ٕ5S$MR\kg~q%)BΜf9䤪w,'Hpq0pi3l4i,<:#N"=K&:L53^e9nLɤp>h</*C 5 H9mߗ [!?/&ʁcLjl(ktMQ6J%I]&h~5bHd/ Il090B' *iFd\D; 2γsQ5.KA㼊pY'g ^RAE  `SADIIbFGQl 8!#Rv4k16o Ca dn$kYtL~& N̓IBҙW98ͷK2ԄdzGxzhXw%GMA|ևdQ6eJLStuW;UlC<U4=wx{MSmTmv%sn3SQkG2:.2 ^"@< VpΓԮ3"J yݐɾ'DÏ#kY0*NK䒳=!RbɞU:#4K'DUl=8eMԐÎx;5A9B隷nb/(;wtFYH\nٳwV6<"i 9y͈hi<֙f#^ {Ό4645ACT9ht7 Zk D;?dn8lϯC>ZӇ N^xƓg'd8c{z>]?yIn <.#:'0l*BO?YBMtPoC[-?9WNV΄=6Wh_?tnxqjewvϿ '×xѮTO⤬t@?DUldiEzG_c[D;Q{LHvFpٮν>S]}1qy?oG49!Nlc6WWkz=u-|sYxN4gqe ٦%5ţ>|GCcaLYͻruq_zq3~Ξj?㧱+ןhY-'x{bFJ+'hmZ6D/3Guց~%4֙Z?B}i"?y`r;b;r_F373,o.c&u.,fÿiw'ثz'ثzj`oW`Ơꠡ.H$@Ap% ̢Wj32 .Z '^^|}ͺP/E׋7 [țz}n=iÏ:g&9-?> -RAߡ)l!#X 8}`9ڍcι8r>2#Bכw:g=ñh3*ܛD4 4M-G<LJEЉb38 N+Ť[ _?8KV5[4oUӼU;k((ʼn,hbQd9A!{5x%6) N>hf&xs3mu$~n[,P2:[~Lyߝ ? 
>}DxcM( 090ASkm#/^+.Vp;"ٞ=~lI[/_j%ĉZ,&YF' H$k(48%㑨PPEm A-vKFxVz#: жu#&ﴑJJ\d\#rV5(ovTIF "Wn$d%ŗM*/[@*PA(o|1X ' FLuZWF饣atތt5 ]$ HNkuRHgCd&PZo@H]%<hG\1bhD+?n\bD+.`1"#AVje0ˠ4nT- i"t;ѰM"r%t0#v4; ~<*䴪"stox@ bAShg?]v[wq|bڒ ĠbrNjL'._-?ٿM7w{qW!"n2#*J^i+S~3DHCSnS'eѷ]l}`oY+HoQM+BRh'E9rٜp\- Zɟh8tQn9*mpJ@`F@_!G)4F-J .Ox!J>ljX01P.pQx#9 _^=zMQ$|sJn2<#:?n -nEpPGs4zD#B8  /ACP.KTGGY/[l1l{yHOpY mL&0i>ѺǮ[2qժU >C/>v_U_.hQ#Ԛ)b䩾,㔝dY9 Ȯ+2A [sל9.A~?5zwiUKZsׂ9%.>C_'&UF}q-3ve1ǜМg_%RP!x\ <\`7g[o@wq+ŝU6p֢Z"AOZƪN*2+peRo Қȑg)%kɒ|RWRvIVK/O[L(.F4ǖdt|C6 ei)`Bզ3E*R1L%_&/PtRA`1!ALSɃQL9<VS x8'QFpF,lB:Kd'y(#xQz U3!@0$HjQuİ`5]ɝC˫{Rg{.NZUPDUԐ=eW.sjcȄV{bzx1$} .AxZC.CJOܰqDfW ug߼OEmbS{{ڜVYF9~ۄ\SK:όImջSx$x/2Ș2kpEW`cuVO@d-vQ{Day~{TmC+kN/ rOk:h^o%s`P[::dQ{aQRDkO?K~5PMm4PcUI1 "  eG%)&W`HaZP8(uu EٞsȂpq:xޮP"ͳLQ„=mLqo@j=:e6*qJOS+#y`*B FD-SGdIz>?(gEZCoܷmҧnitmTZwӮ_rytϭdP˚ (׵zljֵMk͘ .l&zȮlRaer`CAFZAȓPܲIABP 9-wBIiߧ܄'a7`;UzQ@"=8U iuZww fPid}X-+ O.de@%AD/8cTv"w /XFt8ڨwU^NI'G@CSR"SA"\h)VrTU"+fr=blu$g3_ߜWѯ[Ruճ[0o:36SٗRu\4P9X+"+I@{3F 93\L#>-0mª&O L$5\hu!zpa& <1 kd@"=t m4: ʜ('(̅hW-kSW*>M& !-A|'XoRr*!~Ww@. R8p@ti.Ҩ SLiU]+T|Zzn6^4'#I>Ӣ}"ur{7x]'3iReHDLh>[t&#})1 noڼm==i#Z[Y&*IsS.< !BvvY 9 zpa&,HDJcfAn,~$γ ncVZu47s TBBAIQqE[zN>'X{fBH@?}cPo[Fa%N{JH5wm>ӌ݌ѴV`4&l8*q1Cf֕ 1z_2J@eFxÄ1z ݘF =ƃŞߡ?ZdqJ ܟ4oGXr7R%Ap4ћP%A)Tྦ iP7* d s>f>ľ ?B=iZmh~)i+ c ]l!5B#"|~ RO 'n[ښ|&I"Lx"H€ƛsr}yMFGj &5M=)^JV#u0d$ #Ac>c=\OHdS?7HQJ2A*n[ tw\I)BQd;O(UZs׎99^\ʂ}GP)3Uߣ p_L| H Aˆ0B+Œ [=xYRsoUĐG6O1I%IMo@ # DRg rG2k I!RVZR7Hfѱe1Ć3h2!09Sֹo+=8+ȆAm9jMSX|H:0lS ?,(5Ԙ$/V+W*J)# W N% 4"Cdu'ER+403)E Șcҥ ĒFAiet0<9yS*êK!rx?8'${CŐ R(&q ũ[i}҂tIFx)l51T^9@;Yvi{0l 8]\d%uw7g06&RjȪran>W+ۙW=U-l[]= wM/,$o-(z{g [n|~ɗ7x a HB䗟ߞa<߮t՗7g)-'^o Ss^_6T'oR9E>5(p,Nz|GwTJKEÑwӫԦ:sAzC/Ŕ2ccR5Tkiw@ t HGIQ!su?ͧ?h-!C3%hrP*3mrȥ,zK﯊װm[ iM.ɣXOߵv `Y#܏5𙩨&cORy3!!u8}ZܷS5a-Z#N>$FeOLmV.|g| NՃ s7]mLBϚu?pЍ%s\8Z#kA"IR+\vRNAqkÜ ldL)U-ISEóOqXR7g,*J*M0 H* B '4T[>Ҕ[s oc]ܾ9!fw7wo=_>~|qRL֟SQq7gzHr_)Ŧ7/=ޗ8mM˒Z//teYYyT]3/eg0A2"Ӟ~SO?ImOt?- }ӣ͗G)]Z<2 auշ[ ua)GAUkRDwFf}k5.rX߆ȹzJ0?ݐ(!vν,wwXrmwIgP-*jl/|mI}GN/le)F]a0QLQjrͨ5vJFX3-Fi!=5o1鼐/fe {ňhq1^/qW8 {".&[E|Q:W0WzyBcʏ6aOta}i!S7vyp¾fmQb,B 1a1ƱEyP GxOf.~Z|wG1N3#\x'_^rSoJy"Jisfu25▋E8m˱re7 gr").`Va镘˥%\B'"Iz˧*;ӻc')vZEm~( ) \g)&!U&,8g9sѠDHrf>2`< 6 L%*orԤjQd1qE Rz퐨HW c TMzJ:JΤ$I떋IEђ^hnόJiyiY9d%\v:Uk_J1&ЍcV\ãB!s4BJ\ϥ(<4@U!8Y/TJY4־JŹY7V}'gzņ^7xQ!'?y^FO?qEٞN->*`c_֋g[[=vݺ9K/߽u7δD[ԇ8׳CXլOxl¶Vu;CaLLVOiT9f({ LOJ.n2Y<ޱI7 =LQ??/WD j5]-V<:9kdX'"{`Dncoo'䄕v+=>=9[F}a4_7 7{& 'YЏ C1{T_^rErpD}TNܙJ}Y@æt{]3<%-nD_HQV&"B%BY\5~%ij Urפog7wYlԣ+ `k9yK4yEie҅!-XVF>IEitKI}3Z^B?ådc\B^ayevc֞ں٧زi'ᶝuBlڇ}c&l7 W;γx[\/ ;yjZ\DaIQݤՔnB-8c\qߥtUq8ܖ]f0* ? 
GEe|4L$Ӳ>Aos!z!JI4[zfƠAmN5+tq| 8=rAq@y'w{EWZ+m W귕s(i,#40hqw" FNn]x5ࠄt݌`/2lc#j72(`\ hB0ņ="uF[Pi($&JɷŷxC`1V6;[|# ) a Qj$ Z[[n}"$u>{ߡn C/?~j̄}\jᅣ9yyѓ|GO1M,[H=c^4Vcut⁂xD7X~8 w˦<'&jv]d6͹݂xc_lOv ?Enm;NƂ8rѵP\=0`XwL|%ƘtLT.:3H.e~pI ˌf&!cbMn72RٶT :yLЭԒw uTeBrkSR01{oAYϐgh44+IR G׼ʣ GY94[\ ۅHD_ZRptN F,[]hhp!Jr~[ϞY-C#Q1@\H>\W+PqSkZ7ߺ2?TKO>n~$GSqN<σ.>ߥ(}ULJ۷\ОLy;jngỌkZ.uz_o.nz[?c0n*_3ҐNE~s7c ?u~bn>QPT9jeQI&7c!Մ Of{@+.]hU[Q*ZJK-шj;aP6 vSl`i K=*Ȥ%I*+* Dg$W-c,ѷ4*d[kD2|:\,V-c|sK+v hZm5G08YRi>ʢfD\)_o+<OCLdN&&p@I'U`p?٦f4Wi0'G(olb@H)[k -o=vЌɑw{[!֎"i4W94ۨșS^7uFE*D%6ܮSl\i3kTO8ZVZUJJVB;!ݰWl, )'J+",iu9pƷsu!3r藪DI8S9ceT,gU+fk6)f̯P '>inAE.eկBET*cfq͍:onu7H`e3zN!v-pP)]ꦕ9c^`,`· 6J6glowxhصYj+ q[Gʊ*PF*CQg{]sQ0Z7Hz/?:dA12q#SmʟRSq7oB|q}x7e~M@û-s:ZbIFɧ}8Ž0Ώe4Cd/V3ʶO ;uw |k2w )c$5VJ}8S4>3 >Ոݼ8SĀS]>mFmBE!+dEWb6T A8)UfY7' Fxцf @>:\t-e@p?ۑCBLݷGxUE [kcj 2|y-g'*B[{wop2Kۨ޽I|uQy]g^E̫9t)'s[, gl#Y"XݒIw^;4u/ѻHh)9r%\=(Qi%X*bS-QPH¸${6$ %#1%΢KL A'rfPAJ9c)F}:ŖT]kSS=g"u) 90]\F2"҇D8E;vڮ싺T޶HHQ\}^@j$\Eˬ Њ0 LR4y L>+P:r$ HLǚtqM4cm"j%gu%..HN!DϝԠ%fhܤ5 e 8 m ɖ`˹Idr:59H[GbLz=lmF&fb2Z[ּ\. q*"կL(3b|]j^=ȧ`3MiŸ׭K\Y?oJ[=cSD\pZ߾jJI~Eԕ JHkEU#.^Zh];|/oه;Ҍ~E&"Ickd/![L(r\QxR.瞬g.uhA Vk-f$|Sڒmi8iel%">˒y!"5 3Vd&[3Ԟ ZmAv@3qvq14FQU T屨J9b_o$eb4c"$묉)9CpbZ OA DѢ A4޶xAc\ln SVݵF0.d=wqX e/򡈳E.4M˻Dgܙd}ɹYK3Db+-أsY1eE%M' 49f"jsd83/0f/$0ȩi_c*S!Z07pQB&0(e@&D5b渘8qBjx<Z$q3rhw~EDl'J2UK׋h<{xA_ygpwTA+#wrz=rͯ_tG غ *Vzy[l*L;7x=P21'.g P9dj~A1@cp`@MWJx[[#y hŔV PRBz66ǭ{;_;x& Rwz>'J@SZ3i}Uqv;z 9A {[U!Dc8[uBr:V>ݢܼݷ[åСlVRܢl-<:b3 1˟\[PobP4nWwDM0psyTox#szM@ ^~IG%1䒊oAj ʖ[vvzsҞU8+[['X-L$Cl6 B+(n1GhNZc&i&I&!3?Xji҄S%Z IbFX&xY :?jmļprȆozRm Ċ磫Z@9^|\6tO3,ɦk9IE*! :DQx3$ X br+Qy^4ʘ 2~hNZ6dZ탓t}~k47gūz}+sˠWw %;!4#H"ie6Bs fV4S]CCmf4g)()PK乔,!'[&*sx; Q$Uz#l v{s<v 05nuUL-]3Z3vQtЩ`zM yn>m= jXyʭQHMF8`} p&ZwP:DLwYkm|h{pw(ĴZhe l)tzbZ߫qT)iݎB{MDaPayu2aPQCVg3[_aQ=F<ݺS,FR΁lϤAK:^64uV @\]:Uq qyJP2ޡbm Q5?HIU[N@p889.W+A!CHj߃.h @6pk τ[׷L(!``|ב2›?AcSӠsR4L_:#t}ݸw;"= ;ɿ纃`qtAZT1IJJ1 ZeXiLT)4'X+ds)滢b5q`quNϐ }DZצYjadٔhyI u>=I6X<zt8Gft;bX ln|Fܚo|$.n]l;($'ߞܼ4˹b%?\?%۳q5#] #Ė R!&-?f-۱2gu=GANZq6n-l$5nP5iEːcM_ϙ9DH?56Pݦni jp6rqܾǠJd *f. vN [~6rn?9$`efG%qغ֧^jѬ.jH% pah_s[hl[Kg-Eqzٳn-[ "fbs$G['@c,nK!~Q_NT-<i/o- CF7GJ0PlAQؿSw0pm{GTbd D(:l(Mq8P6VH2JC)Lhʬ! {]޴wMF]t~6j'1jcT2pj>_-WQFO1ݡ|<_~,|ˍ9@Hq_̚.C&"&^Lcy -on-]X:xGmr9ݘ'*+.ٍRK^3rgzDuԕ8I[*fT~Qv)]0w@[f!Qu"urxЯljGnGrR<8--rPO*yJs5.;.;P[YMnR@Q.E6:}t7f"JG#)B =#Ջi/+߯6\%z=uC0,cx&4B:DXGݹJqsti|~5LHc€4Ɠy+ppF=hV^:NԔca[2ufBNȋc.*d<98(amΌnS bXL-#6jYbD2*")g* 5?v봔9\Z.$!Ĺ@H[~n"}Rya"[T3VMEl( gQu`@lZ3 ,~IU")6Gv:Rf!#;=)zO[;%6I"Ԓ5Zʏ.0 SfS ['mq\:o zԎHp23Z 77Γ a4Oϰ,DѾ\iMEK6р'ێ:  fH$7<j|d2f\eCyEB0">n+gMj<J%~H"K}+Qz+ l.KlJY7y[İ2D*$z'juKDN ) 1G!%S0.kDۘ|O(6a:Ȳ hKq#%3P?m9֥Bk  lr:" %,hDȤT4;GHBB<*N^BwDg 0Ef+I*y)Ni߂īo0R2$zz=1-")XcKz!A0ਭPq9CIg/a03зXb7K7_PثqR?Z@B-~AzoOMxs r}֧%N,ҾlrhwG6r ~a&K9Hq\h$e\3L'2|v l OV8Ѷp^?4$y'@L0 hN(E_J:OAϱvX_j %+zDWw?㍇< ^l$6b8؍R#$: !R{1' QHH G)Lc2l6 YL)Ρi\fLTI;zO'-q=܄g_ܮFt~Zt]͂k ̆T z"jK3t+O9L׹Z/+H()$@AoXR2o7T4?w^Af8D 55 G?- 1y…ϯd44Rz0^_hho"M (ee%ab\03 [z-st> QqHHDIJ~zT,=Yo(ܕK,a'W>.M(ADR#A'1C>Ͻ}ǎ"ǎ[ q?DcfΒfᥴ%Ѻ:B@^)͢+BxABK-pHi+%B1i4<}k#2f3:j!̰DjȐ(ĉKAQ0B#CQgA80HFo.E.Ԕc3B*E_uJ?¿hz@|g c Por(u"X$/@),/ӤX>cu9 ֨zLZJ$ +hpf#c8xauA9͕GS;XATnҏB+RS#3Z $WN:1ƹG7,eBz#vNīj ɓ~ &}E|y=MHJgU'Y @U 23I9,pF? 
|9` sJb&0T4NB!1bX`l%4HBeL_?׃5ɻMgtv91gUltŤ^O>rBάDfF_VwNVW:/d6~y!\RZsIy.WFHʼ;I%?$+yO~vm>:koُgv++++ϯ9m~7i Yy ])C00"BHabӉ 9(BĦ$"f2$ FyL%B1T'ZdAb|q @$:ͫ,&:Qm+EXCv:'3y9x@tȒ"JIL ezYEdo `P[ GX3@T% ̱8No3(C),r {1J>ZIŘL(`LD(r+#"+#9I9l*:> lF9+YHKFLQj-FF ( RPX(a\@]&i 1h^@D/S9dAs~`pOvLGA|s ][O!k\qq60ni1uTÛ@#zyI@Wq>v鵃]NyH|j?i{{~88j遼xBo&?ܨw3^O( D[/8[gu8'ŬEXe+Zkzp9Z70"8%"(c 2UJ.R%8J<1㌊x$`İXQ !(c0!ՈҴȊQ H [p\ v  G(b[3p96\+BC}Тy5;'PT4ֶ270rGƝ-׵ou :U^*}k/XP7}7m1,qs W w5AUDrMJ%WIȫLBㅜ w+~Ό 6G5"r1 mf wZ? tl#z՟^E ~L?{pTbVgŤyCkfđ1iNleJ3m<G̼,y3/{)968m'v~:hYWPz8h 6&|[cgp< (8 N23;TrFbo8||)kPU>, ~"~vsۇ8+/whȎeqp.JrJ!:*)KJp"5UXsUtW_(1,wٽAG~-gJ-N|_$;X#风2ܯ;D D(MF0qJI 8&'L"kX8֕[a,t?ix ɲ~'d4,N,,7|36ч!-s߅G\9;~6n?OwQٷW:RNH8;r٩b%zuZ6 d@ݪsSPXA`U7 H\5 q\Tw4 4X&d}Ŗ>WN>Ƣ\7Aq_2bl곏^L52^v}9W;Erui]Av-L˳Y%}nX mVYvdj [`dY9̊ňi{v FRDGN* RXxhߖBRg%NʛvW g#z o ]a=ECwT:ႍldl\IVrg*B`Ŏs%+k2XBm%kGXQCsG }ޭpx>M<4NHj(H`a% (4;h륪=󆍂Q j3|bR<\\E4cP~|A6u'?(mA6EK!u||iy0Zt_lwi‰I  ^DA1޶co!,{I֯Bߋ5xĠx4,70ox31}Y YW,V[v/D."W8UY^ |lƔk #hR:H'{B:X0tKM-=>Jjh=O8۰COj=VGIF:SCx'΀ԇ?>Λm%ukQ{AIw!=ԈF]ڇFWvO6B0g3_PIR8a>&>Nڒ(I ZPVS;‚u_8Mnf<+&ޅDHIfBHMk*ڼ bH ճU8AVĽrrW֣ '!f_ 8+ΫN,|8%[h$vfSuk5Lom~|tRҘT]> "b%'0cZV9s!`瘢`rb]Ƥc da!&<!|ܠ)n<9;q"_-y&F 8!\3O fNJu+nDuw.&f؟ܙ\`T̅6} ])haɦhO$uHUč]6@Oڻ~?aK *Į7CL+% A{gHnSBӑ& &Q#ASRgOzEҒwg,)&9dTL0.b" MK I֠HZ͢؟3]bdͽy9YgFȰ 2B԰%#$ TJIYxv)S@ TvEX|*,zQx[zOOӵrEPix!n<G\we ۖuY pt3g[ds_'u˝՟|4Im(1&iώA'Ocx޾}m@(lcFR%whnv͟H|÷ܱ\>4GWSOχ@nd>_eVwƦ?8eƢH-eoG`t;}5O?7}b¿y}]\}Wv^? i^kӏͷ/~}ū/otBBnv:,L3O/I}{o7Ì| .qxkϋA7Oi^o_u\T4ϊ#s}Ξ:v Y .!\kVAׇ\dQ|%রr*َIیGq[i1!)-HĀp`B]8O CsƂu"$)vΙpj-vix ]lFRd(vt'"6&mAĢ tZ4IQ©!BcČ#2E)2 &#C"1ZJKxO4rM!GT MHI Hpx_Tb"X~^$Y (X3/O{>TRǷbǘ.gi4mr9CPK"7CN 9,j=|a"([?˛u0 WXo9|>.:B+p_;p\Ս)!*wz&Y?k\X AcV9X)ÚO53MGQiE90ƑAm$3Xw!) 3ȐjlvL%"&Yk. 09.@-D*b" ~| W=\<_Exl5*~}UGt{|^ !uj%ęXIČElLb-G!ħmRo퉛=&5۪=LKָ."EH5=O;%x=/ }&=[TV` :^ӪzMu 2r)%/'12W"3pkWPv؟ vcfS50>'DąP!)A8 V39(.!׀ێ&YN Eڬ%5c;5As*J4 U&1gH~aػ޸q,W b&fzeDR$\"Kй< ph ˻Aj_+iC3%1{ 5ù!9!4pSYCB7 "*XJDn{1"(-[vi?= ore;[vS*~JEz7[v%Fln*Q_ ;𞩲*~=*DOل.9]P}׺Z|' ćMkF2uT 8]!PRM`u{ 5O1l!DI 10 `4Xк2d.sUD 4 3J])dÀ!uXO=}\TtCNI`RH׎@KMj7}~[w2JJ.!}uoRS9ȃpQ-IWq;8x x|V) Q̀,zݜ4Uw]FoDŲ2/!sFseL'UhX 7vHGz^.5J C"%5@ %dПߔދoL:WM#iRQ݀?[եy] Z4URK:rK~\~j;9nt/ǪMQmAO~_l*X発8HDƌDX:DB&p [ zM 1)p8 spz/pb2Jnu"f-M= K[<:/2 ̊\D\[65Y F5%F%u}=ﭭv(Cډ}:NAf޼mz;u1G<̮pܣ_7cnU>9n=0ӆm21VRv9w·ߗךh$jwD~>w;gȈn]Q. _z98nē Ru4zl>Y'aɬuQ dOa;q!) O %ZA'_IJ=gwN`ͦM;ܔQݟM%r Q +J{BTbDo:ΈD4_j&Hso55(C;Դ_`Q2jxgjktpιmBi e.>oetv#P!jܒ;@l^e0MCqu*cFi*PXzH rJ6\-\E2#:ˍݽ ^7vZ$duQg;y1L q[~.;4%HHГ"!"fݓD˽(#!*yz 욲zӵ{`ӶW=ȟIp^~A-WǓv4޴#S:{AR%J~J)4 :Z{Z5y >#?3 %wL| ĩU8 Cx!zI q"40ik";Sr=ƔjE^>2r{9"k"rQpF8,2k,d \mepuAMa* უHH'EX͐Z1C (n KD)FXIPiO\ PΨh2'(Iz4.`p b4i-%*rºyf+ݒ* rZZIs`tFD/Ju9\xx.-3`хCoٷOlY7mVwWǷ{iw\ Q{}#3w?NπH PCeY𚽻" cx_ZB{̌SPXXs5Ycc?鐾ƙUVQ7Wy ;ܰSۄoӾ߆/Z1-C2V1OcJOMװEqRa[>]vd/[{+(4>e`f<\^x|cĐKfHD?YC6{{u|q ޭ>?1'<۞{.}ɯW痋_ O'˥=@dLr^'cz=F`}A%6Ԍ8Dn>;"o@d$' "~zz&!zgQy՟}??[ 6[=kc'=lNt7Oѯn"^<rњl6;|3 vxCCCC]}c4P@[$6ZT* (τIjsAH-Lirdϸ gp臂fp0,6kn3l~JF(l[e7Gy`0p^^\;~͚5 @ڙn@yA3[#|Т Dpkb%D W^2L%LʉPD5[R]ŘbQ>RYJfuQ24F#KҖ-0D Εu;@PM ;A%Q5n+1cdu\_^3u_ehSnk Г#+]/g]XBi!xD ~kXݖv@>|jWcoX7)-|W͚(o;tk \:W׏:J~HpSϿ޺ qy>> hE="w1Ka?bjGb:v^ i4 E2ÉpQ2h-ppMkgKZm bISx?\pE9H[FwR,{i|ӽi4M^f鍡_3iMzkzvEͮ*^zT:Tx^|pDni%s]4"Sa{_~# $Hh|_(!E^;YI+rFA;NPIQPwpAz(& Qͅ@ҿY.2tJ !ӌj^:7z҉A٘'! 
{zLHkˋ8w=u!?~P$e#A$㺝 wE" v#ATK}Cc/ N݈GO ^D[- t̡Kc>FD) o[^c׶5ýXi& !u[dJN@/[6@ vmsy7LBE'`[v@0S!CP*R^u7w]x!>2Ah}HU=|v/ [D&P)^7Z_2AB ߷'uw+d"]`j47g;N|4щcErYa$ZCQޖ,,lPjϙwjϙ=gD-$+r[đIN)P}~YM2ΠЊdB}q($ z 𑀔*m`[(xU ,CP4>\KIIgGQ(cdf n0 {k=䠩t%HZi39غI3!JYk R>V\de!lY!̠ *(ks8Y]>_k]ɒ] XDy;u2$@%/^vyf"DИ낐 r d$}AG8nR*L?a]]Uo% S:.v^LՌŌxe$)\bb`>c,>~w8lkQ1[]͐WbH.:>?^-o]T :NbSTIV>)N)vu*:Gts3]^QG%n`Lh G9ZمwJd#UqT͚/5Rў\93%=iE I)jt%%\Cb٥>W²S\\Ns AnQ™4%5L-DC͐jmj|.^$qoy%vdv䚩RO ׌YQ;Ab3Ą52cIn(p#i[p3ZioD(/eע|T2&>{Z E}u,QX9;hr: r R!VFZ'E 9Zs2VEnD͙=l(,H:օ4eFk#J:k@շlrp;{5g4y3 8xӬ8t-I'Jz["a8 KߤpL;y gO qՃx?ers;ȴgzBGOjMnyG> =Tdߟ n M?OD }ʫɟcMԿs]ΛB2ڈJSgܲSA6'_8O8.e[k|{%F=;2Xb wŘZG{![;wUH:,=)NSnt&.xyV<j2aB(Q|bdƺʈ* Ī@NJeG^B&L({qM-pdtGCڄ {`uf֡G&7qnȡ;’}ṓMo>6Ԍubc_4l/>[0oaES|Eې|{):ts`ro.\Hj\iV^j_iuBis"5q7G^4֒Ug-*%lu*IkȄq@e@U)!8Sg379 IL4p"$o EYG fM8 #$J_76錽d+3Gè5#JoAi2WesEM~VG1Ө9b4ֱ,@VV6>^q|Ef"FZ]!@BNZB3[%,ײx!.4Me}Pa"Hr7jQZk/]lztTr9d+UCrbw6q#K1GA@cJrF-r(3c!(w"F&y˜{Eb$zfsmL ML\ru35FCa^XѕkrGr@UwG#W5#dEvmІVU{d;y.'r?P믯 #kapheP~nIaʡft?WWt"t%`z?=]9QΚqSH.D&VdRfɟwF/RlA^Ū~21M{E\]lj-ZC: ӯͦӆр>4$YaȖ܌<0}2n}f!/AD^Eߘp[~3S:6X}R rXʇ>m~a}*aƝ]bt5Abs, I&E\A$6Z:(GǓgc6 CvV'o5.Ʀib'Lh'"x$"i'p3$iYb3:C0:њ)cd%%/Rz7sx 72RXg\hwToqW]ifT<+b0t: U ~ c< ,M:ۀ:0eS^J[?=Ƅ,os,ͽ' Wキ<mڬܚ}wͰ=Xm [^ ]_(h—k)4j+/I뗯Vul[=޳{2{?җvX}i^9:,e6YLW YZ*E'S2<`}^ c .-Н;:*r["UnЩ9#Qρa lWcj]M>1%tp=8VrygBYi5vimӞg^i1cw2z]7;0r[}t^2c. `7-ޏzf?MVmZ^x&ú7,*]RB;ېjLC w;RBh/'[fp:j,`C_~t jsta"giŷjBV밴bVҊuXoZ +3cJd@r$1`S Qgng]UVH߼5(w ) 9oLP^ ȑĄW^$BoI5Jg,\k`g1>i@Qr6Y/@!RJұDf1HEhm@(L~*~\Y֙-cjֺp@[:w orۦӵRd㹎U)ݭJnUJwznAO v*dC)2Ȩ xH[XR؄̤tsS%%R C&"G[~9|`IP4ww'wg'q72f9~MJ֑ $YH(Izkd) cdc{|J"W8rrEi^-fnCdu:f'UivRf'UivR՛4٣mۃ٣^(mwsۅ_uLiNb;.[*>`::8h % #)k6xH*eω>sAei.EKsCtȨJtȨ2 =%Z(DNCrW"F`ȭ]ƀ\ @ǎIf)Efk }śjF_-v^tqV['+ pjklQ+ǂKf㒛 Nn,3 1k3ȘKlH2CF$O4C啲>wNff|;@U;ضPws RTlm+5MyjN7u1WksbQ![F&lb"cƇ@n|vJj)HM4Rxa.܎n5-\_oƿ4nĔk1=hՈ9/#g}F|Yغ\Νs˫H7߫8mƌ-VAdž꺪1L,S$&ޥ*ɕh#ƍN}u.F(ox RZi_n{-& rϓ%WGCGC| Y,Sn&q( c٦e@[fd_z4GY/ȕ`^:F.Dmڎ{säqGcU>(D-~hH7G4XHmfQ( Q Qfi"Ӝ&z*BhxZL7]9P伍„dy'ܞkzCoT5>fKQ} 710x\b& cyvTPhã?~_t@ q f[[bX&6y_V-!t"<%S1zn:iGYHg\N $[PIOBKrxY4'x$' uTWWJRn\2V[\ -_4|nK%$z|%h {=j ynozM*&uԐ+t2SS2.:$sFQuݻw$&*YfT RBx UtGK,#`NiTg O3Y" & faDd3+=- ZRIō%]#^R^ $&f% (S̅% T2j1^p\rSkI"nEX9t4jNj fԜ&߁U>ǻ6D t^LiGM)<&]𾤏57EL~uI.T䞼;) fiׂV9:#ZBP5tcl2ۜ.=[ˋq^ `'C:?jŰ4a-274g^eArks~E^F0Y`!;RE|&mR`بJ ]n"L~ւWΖJmA20jdqdMor}_$!;I2̲ܣFfB>ys`0+vd&U,a,4rhmJXuBl0=a\*- t)4I,D" ugdCϮoۀy˷Ӭ bs ^ѐ+j,tW:<^Fq(hZ,@KY j8.p$#bT7Yy*)9ogh]XgtLX{<(di[tk0r8`,ȡɆ{R,dУ602ђF%:)*/In]R̰`%3,1a=S?{۸,[drp&O ^3eǒ';`VKLIT&[WF&C[ wQ 4d rHsmZie"Ud3(@3~_E)XwIRH'4!@O7|tM< ErsFO۫|o 4?v׫7&ͳ≩?]Bfb8ϿY1|1^}X,XVWtdX*҇>1W%&YFR&'*+bt[Ċ3W}&]s@y.WE,Io{mAI3Zhit_,a׳&# HH- J%9yC:ZBfBeSRBGER\0u߾QZϛ[_&R(Yaivx±O3v|Y-K5Q0;/LɊr+i%  @sU%+䌃 l, lh@ i#=ZJu:SYɐ5*kI/0l'^Bt&1Z_IG#QdK+3\TV]#ٶ҆$LsYtXMIkTI/LZ&ҕ1w_%iIhv&\YBfY#GT՚Yp2UBLDL)}.i^[RaaˣY}QvE40m[d' Y҂TYw]PNaxcvADRFLJI1ZC,xa+RCU/6!kxvZr2:>/D A _ֳ_# =Xż`@P`_E6ymA-Ҽ~Ct̲->)G_z@b欁2*yP_w w,&d*&:V,{I_ 6teRBغLAa2IuGKD)ƴEJF&hoemj"hDbY9//%yH׭HӊTZLⴡ\ gL~kRS|+tmd(3+E5BO܌ ɿl|(dA 2ik+mHOd'G1YsԳA =՘CmWDO&S:m@AK?}"k%͗lКi n`6`#!ALLV4aP&.&ׄ ϡ{S] AN,uN&Ӌ@M7R :%=k{/Ιk=@RFOi+)) $U(0_JNe܁L3ql Ph k}?u \TBNrms3σB9p_HOJSbjk^{WbJ043ϗ#lF7SK1{{"g3!P(ҌβD%_B>%$2#[#~KN˱$JPSy^I4ԩKTnݳb1bwjg֗嬯įg Iy/ TGnR33M~hݧ|?]˟'ޅri]ߌ@6:y{?Ç a]!CN[g{1yy.yg>[%XՂM?_%9o r]jyatr۷_:dAZw jAZ9{+JEg+c#+_?d\? 
}HtYV8-Y-Kezy%ȳA̙-}5㑉FN%Vo?&84w%|Ɓ%a3}:&`чۧ7_p5$Od9󎰠kY\h*CmDjYVt &c;רQ)˃}*ZfV=;+{_}y|'x_.^13ro̠C和%rat(ݜ\|]=;&s_#@ I#cu*nóiRݡ" Z6$g7\T{PV<ڻSACC|X,̓Ɛ!R jTWE[v=wA~pK;r֏8j]_+<:|}zx =mhઇӽ3pTJUOY#UHrYAC+,ݫͰXBnOv[mBR&Փ컾%Nb9Ƥ,B((* bBLKxLMDSLZ/58ZQT 2HJIiSJ(*zL9yΐlm&1E[NߘF!k`f0c.4[@RuѝHmJ-8Qx~wj@k׃L :ԄL.5d2??9BN؆lB^n*7R< 8! = Ax|cА=\{›! ,(6c zx``\Z|ТϹ+3 dH dFڹ艿P'=CQ)};8<"q #? N/dGm#r}nS猭My„K9٩1AydD,KLk? kԠ;p5N^iG@n \蚘.l߾{w]f^ԟh;َ;ai\]2M>.'2NP4OnG!M/#bGR%Ǣw On>Jf;7eE +=SK12T0t95|fz3b-5P> }н>0O~ykq{ G~+ĐO8;S$Qɨٺw<0"9teTX;D-AD2W\yabу5AŝURVsRS@XE{ ;aX} Oe_%EQrLڿa0 iS.[F5ψOsbOՌ}CN]]?oseDZ1Fˍ)|r\2^X" rU,AEq- C@_Mp?ү|;_.&\Y,t,&Փl'yZdEm]J/!o^t҉ (TlhPYu?5KB~g9 O?2w(xg 1;aH7Ί) ) ;:΀Vh+q;m*l<ΧacܪٽO=+5i`;cGg󉀇-c-eƦXW%=N51F,ɡv$>-1$ hkP}J̒g #t19 b^׃P!iNtZϺn#t5xp iN8-3 ++mF.gk)w=?ã[Am7~jguKAo,\퇦ĞI/I`{^}VnOԾy/yss+W5^f 76ڨURAv?fl!`ʊ_[RgՙuaГv P]Y@4dmOenq w^a&P_%ǓK>W[Cj6Sn{k5UKdf7%$'çşVކ-Kx}{^>}똯Cpv\֏yCVp7Q֋_B[Iy4lȗ=DUCqBy3r_tL.U3Ͼ]m#Ya^rv5ŀo8#NG{f4$qOh(4$EJđ(fu}SɐcK31; %|GW}jKᛛ+|2X1u` g$+.Y B:H )b}9#h7O7W3of>*/oc" lCɽni_r{3۠4:Y܂}GɋZ^(x>b*巃p`իd}cy_ E)*%'Vg~dH/kU0ELY~x""+Lh&|pXlDUs~6Vn5}XL1|*tjֈ[+X5Dzeb$sǭL&XFjJ rHLp{XsSAYh,#1i3BldL3$c/?&GLP4%ߕeze2*= H#0߿#n⥜1="< KFG  _x{5 |Z/ k epNA'ϿݿƜ^k3;j-Fwfd:_AC1wGFZIEH~/SW> 4yݹf)%W_~}'xnJa'~B3‘BWj[;}+=ڍn&հC3q! eRAkX(-VQÅIy$(r+m 9]`V9/-%ivT(CB(hS F:!,#8?k41ǣPsS߽|֌<#F㻟"5dgsu a xZifYf!*G$ʄz%|U"Ϟ+MppUOXbO<^Ld -vMh}[~^ʞcҰ YsԖ l k2ލu q)UNƶL֧@4*&,Az^Kh|JbiP3䩘dV&T1c"L6YPEs2h; K<@Bb3R"EP ]|k:r4{ Lڂ0!ܠRw75;xRٝ7+uz25D?ۿ딢!C_"JGFCbA_<;^i}]C[BTY>ãf 3rGD3oGNzdZ~gW?v~2wK2E=TՓgTͥFCZ,hYBrҵ X*uY/!uQiW\5I;`RTzROڷ0ž:ߑYE}Η=zK]+bY{2YG?'䡟49E:)& IUb[ŤmP`q+;XG@,ET7K;'ߢ '|þoט^sF|%KCMF R|!9l(!S,B"csK Zp5g윪%kSA#:+4==$O?|ъg7$k!LT3 x|e!#yeepW5αZJ(%B%9t9lO b0gaQMk3#N4*OyMZ+t΅ !f BA,0EȒLZc63-A5NZ ^aN<(3#dћ ´S=9> 4',N1eZSg35R;M 7 {QZFK!8a*xaL8W12|y#s o* PCĔredžbmߵ͇3baMЇ kK HϖӲK (+2\.+;rAC s#CMa՘JEWJⶊW%Q+9x~tKwʹE .Pu䩴h +;>.+]bTdH*dQ{}*{zpUY 4I0tPj#BDAͰVX`GeMTpɱMn95+hj6zh{NW)&4 NA` F:k{}'8yOM2h}히h-'uf3Y72Ya {c1&W,TA#qF :YBŃjUMd`UFV(d) 0"cb&Hy1r{ 0xfD 4~mUa?OFC%sD)xs h/=\`*}kD:ZV:j\`z%YӃDւ3)3fJN"d064/ye+ O`Iq$f!**[ 8X!FzE[<.!Z~G"P/E45]yy[XsS1G!:H NJzcy&(n> L)k-GZ6%* ޺r`LjQw%/9>#{ K8oB5W/) ! ǚjS7A-':^,)k޺-xOT!_nnݶLTv=㽙OfaZ'}WR `k*'r?kyN?xɧ}A\u(I!)T-ISdTҀv,Z9Lߣb^ kB *hWRzP'M'OkC4I$s8kT!٠EO=7+t sU/<#0Qy^`T:s>ܠ.i^kq3H3l@ݤBH}F)y>݃A=_ '~k>ׯ^atMFטZh*W>v.T9vaw^ν;hJвEF۵)ZKꫪDF#ez-dED*}kTU'z@kiHR}e$@qTqz0)ф;(hj N?w`tD00[WN ?"+CL*}_No[~MXbfKۖ JZnb4΅ lN8d)dcX{nTVXrBHO~)`=⿊;{{cGg0I*δR-%/}FeN.I lFC'e9C.1Lyg H_o.*"E$;` Ff9/;KJX .H_Ym"`yqJM|1Y NJheZӓ}qYu_{rAU2v@m[ kQ, ]e'xVZDu(^#Ĵ-6&vBlLCUӘC>&#e3m`E8$ cm83Ҍ@-wLps"dh1He["bIhLuÄ$G(f=VHc؁Si,d"QݍDCFu֗YYUyfJ5M5s*8@b:PDT0蠴+-嚇&b4#Qr{0*k)%OG2uْ)F?f<.gaw&@ X sd!RNK}pǘ0II,TYe2:[^;16AKFjAH _\+#aI(cHX#8$4rɐ 桥dS XFE)V+ aʷ/^\NY?Tfa<,KE7.iǚ>|~<)%>Ssswy"E`3 ~˳AO x{}}5pyb߹6wZUsh' ڛ0\,%Ƒg}8nHX:ZuaE'gR09E`c yA)Ol:7vRLE1H_VDKg L~-+o9g¹b4~?;pFӗ# @1~v7ܞ:dr_(Xc^u9f-bf[kH蓤?J`z(<UBs nm7.^/JTE{2J;*=ILbϨR`rq~6.1:i94> ױ:R1LT ʴKH-D)7-DMK_1@ .-<{Xl@aGaBn9[.Fֻܱf-#8Uz#V:c35cHA\2t3`NښY 3`#5]I. Ȗ?r|J呬D`lҘr*-GںvNFSkkpT!?msuAU<[)&=La&ra?j8dc3m3#AN-?]O۲g9ȋltEF^QYu5D՚({4 @pPx7NVu7&"S#CWO Ъ+q Uw 1^#Q'TZrGMA gHO.F\X]tTOV'ۯ yB_hjL2Vjq AJH땶z"u`Fq,%fF,r,b21GH#JXpAkiOEQ!DRRMjP ,}BPAYO@7L5ᒷY""+9 b{dJd25"+-ј9l\\n's!莨乄!V7Ìzv9Mb}<ΕR DLA5"|a !Sc S?HspZ"Djg3O%RUaW"Us)ǭ]RsBj&R {E0%8d3TlAJVL`w*u?E-Gje%GhMrQ xI aC`=88s,0)eE8k1]L qhӂ,h|BC,a]NOčӝr; %nh*W5݋E~fĢxrJx\fĸnOZdAWLۜ=LB'$dG5ƚQK@3rX$jRxQSf%Chxvփ@S 8φ,z3)rB[Gg2҂w:FLp!:<0&Ef'PM0Uou ;Z`g/é&SeA:1 fxnWRz2IУtw321L y|?5)X Xf,I)3EM'}`,(d-iƎto&o-7^ޏSތsgYq1Q~ E)ZY9OjHBq͒))zݴ nTwnG<ӂ[Eք|"Z]*STbZ1+d0@Mා?҄+uv12 a|{7KPOT? 
&/arbɉ&'fXvb#,/wJ0`7|޼ݟ^ eh_ܤ`5y?_rĈ˞Ѳ0cvX1[3 s !xP~2udDJ v(2/{"F]O%1%7bDrR[7L))V<$"8IOnnGE. 5 bΤR x%p0 $y`iLL}Ӓk*wF RCs鶧n^ms ;Km WzYuQOx0]]$xAeZ^&L%lT8 ԅfݡLLPՄJlbJ@aL9@%n*ZC6lZ˜¯ɽw~T .].}Ӓcմ?(ʰpB!GQq||TD8 ccNaaZ%:| Q`Ж}MKVE2Nʍ^f0hKJL'%`ߴdH\;[뺉V[~5!}OǷߢ??nKϽMf!~>![.[;Z*0>?m/.5œ7$ଔϦxbpx{?ksDʽVAh膡} /Q^Z:\5;?_Mnǩ{t2}D*۔ggi0LgUl_<H|3?E&/{.0J͛!=KL:tp5Z+"}ꆩO0>ur ̖t#Nj%fHCFLH?H8!mq$Kpyb-2TJs-0NG0lj6Gw]F/c8N˂IT0i & Sa`RyނH(j=BfJϣgARyF"w! OQ q: p;jR" "Dmy#oъ-=RE[S3g#o??eQX2Yq?RIIIevQ |aaLu'bwL9veXL4R 뭏3RoLeJ7w]WVWk_lAM3|" 6aLqpE2: ?SgVZZh*PdaR! j@#0+,&rj|o} T.Nwˊ3UX<"ҡA7{ZB3Q1.+|Uvc^cbO֏/qn`}oosJHxG!Wm!?𪭢J}?1BwApqArU5GcрU;Hhd FJwrqΙFڇ`XjsN5gʭgdţT`YrlQ)r/ݵq!/\)TID3a=o8jw 5XP_,|RKGQİhk;rNJ߇ν+U0krQ#aÄ`R:c*T;jgǯש^ĠRcwqR{wNr$V}7&-*:x t+j&$уe]γ:?X'F1ۏ^NJawg@jN]]_i/30-:ORVL )@#0pe[|qqs)U22dRNX8aE4XAȜ$m9DiЪ!eb o23 Eځ_^6礢!u7 *#\,}CTUjn.qdcas9.E@Qɉ==tocVNQ?JDs].h)3 J^3D{n([A($rvUoJў|M[)]P|Ιe:efA;Ԕ)(0)ZLl?qW ԋpŦ|~: KϊS;8i \C"jc,vb?&P*oMolC~o⓸u\=<CvT!ON%F<Ts.H`ԍ;LA$=o {tTy^y>@.H#i))K˯[PJ~6U@gOˬl`t|CwVZ]YoG+ fgA@K,v,{F#OWlY}:aA2Օ_DFDFDFZ)0&?5a XQxZ- )8Q8˴~md <%3c܇2[g5k1Ow)CDK.?G(.y$'IO0ԅy͆ 3 ҎKc)g'`#QX{=KX(ai=p/vK)s=t2+!3_٣a3wjɹ|L1Uݮ0$5)\{F84֖F)(Ԁ]' fʗ %Zz=̊@v,(٧Wb{3}n ^qG 607%ΰ+ۖ%_+PR$Τp"(vt c,|z,G |W{Cs))wg.?qz\ݾ9 Lz뿥~p~ Dl y雳-!t>SΥU~f|ஹ%Dk՛aBX+l, /6D\2LJLQyNϥfUݓ^x4No.7L:nstwUr:[R1*R1^ztf@+ð%(CN;#+(",T q 0oDht j&𓗜 imh,̊r4lay5+=zSĐÞ9_9߮ ]AW. g> iQ|5.C3D3@\6NBw {YxcȐ ޝ`}S ko{(Z_aoyoa@v{7;7-ozg=_`_wvӽSq4ڪۀJ` R?BvU4)L԰Hؠ0h:̔ x* ZH#vݤ哐6\qJ&OJеW3(HORp\NpOCY)m{4@Պ4}F_lRRܾ,.J;#1:`jcs@ W6GI c(N^$8]R\Y3hn:SPyȶ`֢~jxR{xROjxkӖ!K)oSST=ղVS\>~Ωv-Ïܩj5˗w>7s+:E`* :=laD 2e )! [0?/5*9;_@͝2M 5.L/o@'yKɴ^/ E>Ns:&3 #aLwJOQ0BĊ1g+8C ɲz|tg.+s1UX2Rο#6*.7I4(V$w$ HqYu$(>[,|]o(*Y+4 fPa^ك,z-cqr6 %bN`4,'%20f8us!,sǘ0 %$U2{A_zdS++#j8R֘k #c@VV`}QUx9ETDdR.(R(`cR+)CU}ǥa{i2ޜ'I0pp9[7?趨b417נzP }y `SAPc! pNZ<MD|*ˁ6qp"ɺV~Un-M ծrsK1hk΁*=7f**4ݎ>ofv.ćH&d5r ۃ 7uJo+%ҟn{9?Ӂ>Lj :F3ss~f 7unPy sk%2L3/~>ʑvvHlXB ^ MͦM NZ ӏ1?qu;D˛2r /@'~sN<dᬥf]\8: d%\a5+ S>( -.ptfux jOCӣzʷanNQIT 6Oߊf5}~̏քrgEs:\W>[ܨlR9* sm #TXqa, CdgU0֊ 4/zRs"-rsWeVY\2y~% Eyfw?KW 3WNl%m ՔUcH;|D9'Q Djv߀NmDԠemNu Gmw /u 5(؞gڹ-iyW##"Poӣ)jQ4XZ3Rk?O+3b=D O`P˴вP^$íO+u}Q`Zx^Ѓ )X.%St>[ _1~*xc欞~͚XB;~T;pv!0iܶ85e7x51PI8 W{`8 2Hi+TM5clm-B}d Vi_Dbqk!U>R]U)]1ぽ W{EtAf9><ًr҃p`g1]oKM_D.Gg^4OC}k鞭o`V?.`%hI`KԒ~5x(+ j[@/,6&^83L5BUoݶwѠ- IP-{j>ykԑK;_ZY %^C%b 4.~m.@i僎C*V`JDFb@I{1v끭N%{, fei.-E{f|?]|,+)BaɻK9~%J:qp։kK>D1ݓU#$E/#0 Z + aRi$L%QyC[FA:g ^G߃Y3Ɵ /=xۦT=$W mQHsNϜVZ !;P 5|5vPJS{:ZGL!9Wh&!RpэzMJapsWDiy%WBTGIJB)*=3ş7OZce,DO*yu?jGMI5ߏZME !hLes=,MjhBV0 ?3DŽT<9s|.u[Q:h!=0m,>& S_<, h%31JW)"1I)"1Hԙ 5^-6 +|t4be [j()L4L\:P~4 ,`We^?mN?ƬvXȠHyN_)`52H챗F"I@V&)a\ATp!Ggm Ht6nCK,ܪIʭܪIʭs]E RH X(ǭ `$bģVXRTyZE[dKҁ$zGZ D‡qA*'aמ$$'ɯ=;QFQƈUiH$H:Ev!ۀ _JSK*ډՠZeInZc&W<`$$y&uڱ(DrLE-0Ű E,'l".yH"% !ʎoj6ֳ~W22E%wZ7'L)oÝQ(OW uuYUcYyGͥ_GE+!:<3ʲHCx+2DL˿=,B5\>W~zl UUaЈq_>h p脅k2t'q L&v*!WL;_9SY.Co}H3&D*D;17]]6yt̆ \xכcׇ_>\L߳-|' Őj`z{DzΩuͽP{'j爴.8S^~ws|DU2Dqqaz}5w$CWit X. <מW\oܒs0ԅf~?eB ]:EbDz9w ('uw C>{;]CϚ( }Eińuh4TO8ރXR<_cP'%R ZVs*z&Z8gt-7D)ek$,NdŠ}) LCE 5D#T[0K8bJ0Q6Ne.R=p3EDPՂVHH~H7z- cg(Opma1` P"bNFǍYa^{r$~10 {,N68Mڳ$g[Z#vؙM~U,ůƄSb$섎ÚdoS&?OlQS*ϔŅr0f9V"Vm9-mYqoI4&o1=| r%sĔ0Gd0+֚b]h"#rr HD rT[`FP5Due G- ! sKH-ptF+[PPEj) ')s356L8P@ Vd89S9DFJK K"KȍfD q, M@z"J/%Ҧ"hiV9G-`}70/9qfT %+$GK'Rx)aNJצ ah׺ D2j"jyT QiX l+~ge]cI}%;[< LMs35?LS5,|O-ݻ|f_`KC ֣a\4ZCAh^p*Rp!. mnR\ XJCJK6FO)Pꭜ?\qkn%䞍(eŏ~[`-HC'6`} M8>v9h'إkܾMo6.^= Y~?%aۙ卽Y^+zax;eէ pqtO $ӺG14~(c"۷w+weJGb  'wQ8m hW\J\3ͮ[#qȍOrUi-r__8Aȣ.$#h=)5$ nAn'kLNWjkàpd㔞2.:eʭ^^ I#|_WdIn9x8&)^hKB9 'N5gB$sGz9לD|wJ] ʓV>ί<+<,جwaGͿbTؙu1et>W2:{͚ ow,z?e͏'ӿ4 tU. 
o23<]z+Yoђm?0wg` i0\t<`SAzk'u|G_L;X!ހ+ .e7B eOYC zTZlqJ]aHl3igsQyǰꐅǪR?vUysi߼t9LYCa=Tߨ1/Vy?PTg`vWvf:0yV4'}5s U$bꋋ&IYvMTaLRچeXy밾7!f8 jKs_ m_vl{ЦH/wIw%/21§!:xM?l8ej^}ɦWW8_-| 7P90ց@O sΰuV8RFZ)EJԃeH7u:HU&(S+2%pL>MileİfoiN*x+ RTDU[V[Fa€ch?~> [ Yp-? Ѩ9dv}eqngs@]?ϏK K 7A'8#Ɯ(X(&TC9J 3 Fk"6xQE^4j~<J_]<2Kw fףe;Cl+ ټWE˷CDMpeu'Hw;I߭Ҹ #uF^:%g{ yF$_G)W/o_Gcy3Bȣ'Be'@tyβdq 6EZ ~i4 dMooa֝qNYκӃJ`ZK=0EG]y[>DW+QңK-Eާgu7(Lj tQw25G`:89:dp;J IS?3HStA PB9rBp:+l.-%`#ݣZ F=(zyFWK%i G5z,aᜤxJ<*3+Kǥ  `3|bK q[EY*4YEO p ZN9=bfQnWdMĹ#sΩc4 8J5ɣK އq"#/'X#t{Pʻ<j$<ȍzȍkȭMO_wfQhyǀ!RTv5./iىj{െ^<-Z.n!Q%1"*hB;|<?Hg;#y}mk9@_&& _&w0Dit(!Z'R8}Ljx-фLr& I;`EbtB=V _gV$4e1g`әHFu90- +MQҜNl'8e%kȏFgB:,RLKˍYl#’;K@QuĐ$],J])z6t4 N L+sE%9ƍŠD!;&E. 4EMIխeL]HPr*r/֣aq{XUqN&TrE@gЄY7g~ܟڿb}]MgTx"dѫonοÄ N0=]gK8 酙}[xX4B1D8DaFق]CV>T+$RJF"N_%8{u#_<TSr6t{SZ#BDAI "CD[LDGʮ¤DT*#I"8u.u@ 80zot.;*Z]qt=@“ qR!"ꌪY%i6 F3 r7JrV!J󱠔P 2[P[Z,nQ8xA7u`#v `hfyZLugRΠK>صȩ/x,:m<>w}J,U7p }='DqpvCQ۝TCu_I?mW &u$AU;Z@fTTgԃ Ӂ4Pi-!:/|]6dѺwVw!5$J1rPsn?vZ?wBQ=(F iC"TT26,/ݺ~yҒ6ҙCGQ{WtCuz ,t N/>iM(Pz=^U:`rR<B's!OkkuXk {lncD16RVj]VRZ !na r rpv1H0)kk \sN\1O3K^S>3"dOแ'qAĒ|F٬`xRSZu%Uin>9!s-RW7ʯͻEsu68f/ޕƑBW^4`/:$ڔeS317}fv$9(vWE~hb)Bc{j}{L7ebj4콅zę[Nm#8ံ%M6CC()s۪r7ݽDcVqfr%}'Y+=67g{ V#b O<>zM~ڨ>``KӤt ,6I2XNP pMlt)%$YzsU@]yrmZp1YϺ 7"ިD~ tMßO;Lf88;]_JuO$Qy!0"NHGl׀ga1-fL27y"vы#]lejҾP;^KNH}F_C'ԉm-^6d}7}{Сu{19yE Q_@j5>֬n\=-Q]?[RtLӭ47zUS5_7Wm*$>Vf),Y3]~X`{/?5`*&}jW?6wьӒ2f=TTE^xJUs-Tm^#wy]a}c J[gc[^Q x|)Z5gJ=%* UakE0`H&WVe:/)+E*/9"[#^ ;>+&<7RzWx-i,PJ <*] WB(kzrӾ򊙢1ypuXBy4ThrKUJ)mJFy&Zi Xh"ZyYx,E] S,"*"X2&ih=sK =C+l7W{],oVߖޤi2*ܶѐx5|mkS>祿 9'npdwov.;.X `Nߍl#gt0wv&zYmPQjcjZՔboaUu@jo$:㫮ӫ`|qY-v?cl5m PVյ 3aJxK~+xw=0HeFDZliN*|N# ע ys޽؞TiP)9>I\VMe{{p{:a `?(Y0}%@ɄD l}OK\qZnCdMfLYNOp#GKVqj,U0 s\ftj̏Vs:̳iꚁ:C O동%ē ŅA=ϣ><PM['茧ytҴ%$SP?oiS@ CƍĪŒ 7W$JҐ#7"Nw4ӣ+p= hTju'qB&?'z!O %БZNSs;76aOWfz!A?1˨~Cc#fp4$&6“(dK&>+ݾˌn/0Ց&f?ho"Y;g"QL9dАS"2:'9{ i"i8S),@sH[2Vl2`k:R_$ UJ=buY3s$*pnD&*lv+J^,ىMoc{\rXj|f$=씖Z`ck )p^$3'IRfK :Utڲ9)Iga V؉HK-lġX1O+aA+cxC&`,'Sd\ں,u{2<,ZF0a`t[>f(O89c0||+1O?aGhļQpEfBeE 9QyǼYσ:]QQ y<")|RdI)̅`^M`1vgJB2Sj7Jm&a`qL " x[Z8qf?C,QB]iǙ$ēkpw=)쨛%soX4M*rVn!OqȕӜgJ2[<׾r,g,<+skqԀ`k@R(l]ԂŁi>%J{ M[}khȖ1R}ǧ kѾ]|fJ5D95=x,gvc9 \tvn1sb [P5m1S1Uk3Y"k߿(cΐ/o%9wu[:֤vqc2QCkwy nw -F/.4bj<5wbTo4m yicni*txabVuV6hн6_ YPSFi<Ӗftne% u^H~d.X6ˊT5V֩[ V]`<~0{Mӱ헻4|GM~뮣;yp^=549562K~K:ýJ<;_94;AoTMOsY j#Y= 3tꮹ-ٯ?>. kq0a7a~&Mf OHn>74ќʭ gx),ӵ+D#R`Nc3=u\LW,J%{q!gj$rH;XP(_gS69uG8 KK:ױkC{/|2 pl,N(RhZZ=PϺ7scTTBeVB(97k _#l(K4u͑B1q[fsP]5VXX@ɾ\qatW Y۰U<*ZrXcUᷘ\\wb*0CYpbqCY M;*FJ0UgMfv=I5/Pa FDHCr6VY\noNIw\0 '7ҪS?knvޝ"?ݨ(g@6de9u  ֍#7//ooTMe*@\YV m*T4ʡa, 7; #}+İBE=NO<z/uwcfOgo?@\kO1:v,%ڵ=%&hTn}嵍Xr ׵(T!&n&WOU :gNT2:NY{jOׅwyTr,q v* u\QqK[yQ:IDV_wW}Ubev[o\oF*bo_t}Ǐ՝_* ˇDlѕY'"_m* ben ٳJZ\=͐>E)B=FxJK}guSsW6wKNp!ħ3wNfÀ|~\8[!Jv$U0vA`gz_n[f琚Y[E2,u̸rK (s0GgpSИLZw9}ǼcևS3sv:QhePNK'B7'df+hkr9jr[F7%{c+sM}S,A;[9co\sAsm'GҊ=v([M <;pv0#U!didGT"#6 (yPi#)2VTIs2b m`AAK&<0ti ~0^d%#_ﯛk9ɭc?=7s}]kZ|ЋM~ |udC8>4C8>B}T4s͵Vsj)mmQ̕![x}}YesO595<Ɣ\vl+?55F $W5_~ʷG't?ۛğltiX=g9RХBeTr&CgUIPW >7')7zHqBlhG3-|ك v4M8h:b^ }/3l~)I<(o!RAxHi\`n}e`m ɬt8RmNuF=z4sAF%ݙiȢsHiZwn:.|?RОw2={\=p{?"v3[5AcQB5t<-h1ѾxrϬCz9Cn}rv[.X_6 yǬ@ÕlBK=IA³3/|N(Qӧ/\\&4{:xu['UV&HMkGqg7SN~̖}dҙq! vh/{ r⸠rdmFEC'U4O3r ~كA6cٲW'=bKZRnG -vb**;ܞV d˗y{16@qk[C!g? 97DJA;r`TZu]- S=+w6ag\ \Nt0ȲZ aܜUM6T}ap?y)>',5ٵ+\]|sۑX"MwϢt\Tuokb.]d9&q(%Qv^ #Q6yԧU񂤔{rt'+2}|v 7쀮i8'éoy~k̐^{jC'urjqi| wVНٴemg@FE;B+ޛ;{^ o~Nu_0u % e^)NBTWN\xf[(Oc]<'6H엹%-#ȕj;:Jm¶sO%5JW{}͍dlpT,&3eӍ.`!bvnuy6!8LEPX6΂[rtHqFW&S>Fܐ+$*X! 
F\mei۪תʶH+{-μVxx8~RU[fn\Y+lV^x a/X`&1{6 *W|qjٷky'wNQ,oR5 MZH㛧[זja4u{pFGELb]}ʴ^s;hDŤ=H*QP;7v_ّY0wJlP`5z>e>Ng1'm͐zuw]qqȣ'!s CM9q& @0:AP@5K%dBK0hu=% rH(=NRX͊wf7G(4~B[^9;u:IAg@#dK8r1)%T̑l*ș',YAeN:zǿ̍bv \v{uF(n 19E"G,&A'ɹ"'Ëc#,h2p%c X]u>"σ ʜ&$ITj%Ȍfi1>pR˶+1"VE-ĸˌN$鍂\@'20q(S,”28%i|WM@.o-,;OM}u@y=KYVsFPRdI&0xVL%'zuV9_8!X7/@[c\pu":k!0ȍ$UJޢVZ젼~93chTze֓Yprk9Q:mRE #qE<&R,"qGCst 9}2"FA2$'"y\<:71X jlU0VV9ʊ6e/ -H_Q&9bP;} ~D36;.Is=ǐ{Z,?1}5CqT6Z"G7om@;>>age|5& ۾J(̐i.X1(E2wY#BƯJzd"|!f])=~K*FsdQ*=K+cjJ9'CZuguϪ㧟>5?[ف2IhyPB8GbLbu˯` L6S?!z,zcTr[ 1XDWZrm+s/mmRG[mRgRI4ܨ -߰}ӯM?mnk;T{'l>";DzL̚Wdb[ vq|u4|ˌ}0Q}'p AqYlsk7Yj#ʹj2^3*msT*5G2gӋ_b|r Cv^m[6r4eM[>L[R0;;P ߡ S~gLft:%n:}oJoł@e O@I s`R(Y4QVxmpJ7T UTN54bNvtQԪKfrHo3̦a̾,Z3Sb֪z 鸞K 5k(ѷ.T"1/4؍RTP: Ke!Ƥ߲iXFKc$eqj9/zP<:Has/XVVFpUg€Q 덴"Rzv;&5*Js'zW'q4ʨ ouz~ޢ=.{̦O=ń-?#,{$B" UkS8hzӺ1ioؾ5;yN(CYZ,ܑ%w%'C! = %.9'U`iՓ 0K}<]DGqQZ{ /խzz"[ vN6FtܟT$yZP :htҪjs` XAw$#AK:~j)T.ml NJA g djGȩ&ˤEO ј iS&aAE.A;΢3KX8"bYjc tHWq10@p̂ƆxAR c+;>YZc+h_}kmTŭsɜ3y@!ođ"  (VZ{j_5(,Fh-e de6^6j58`&CciLj}t5F[OƢJN'4xt3", eRE#*"r%3Ƨf<'us /Q&m=, vAvwnm[u,WܠyU(O iԬ{8)fvqEj-(VhIbR&?[5EE%m7(ѹ-j35&%Q`JjDU%bQl9WR]_RMt`LU#."M0hWJ;K[0IM 퓮$l*T7(LTj[-gX3cU&ϑ*-jSʴҐWeH|JV36mx*!qU i#i{'A[HBoՠ->@Єf;5B3I򨶙D)9bBwϸD }FlLMh Ԣ؈u[E~`zg9{jPc 3Ny[@Q*eouЀf%quNE1ReL|ԅs PI ^CtS|xs5ex.m1h\6*u 0oAgjwX2;7A'Y vg')AGwmّNgCVIveURA#c%|]*GݳXC<wƚ=+X$x{Ֆ3p#?A`Ya9W,mYfw ïl +HBݩ }-*tiU:,+̳wD*N&7tb:BDV#D`r&"=G= 3w<>'L(0~Frѕa`tT=M CSj+޵wdm~3:vlQYo' ZInC/Ub2s}7lɭ&&qS6סo/&p7[0׼nmB]{K-ΌmOos5u5U4wT ;(?r_t%G>WKW,揾aoeܬӿ1yrb*?{j=USGJ洞&զ}$=1ki'r1Hi:h2pw%`[lE yM4MRiǺv۵yu'PIn= S^9!'pHOnw I6fzDnBX:p.l%<ӺDn-8mNzLb$3zD6&'=! LNx;yI w'cܝe0QS0mx=@?@sVکh^]A8x47K`'M'5̊lʱTJZ*9H&,07:c40l#cI]kM!mbr:Mkh7>.=dR&gizWVt[S[vʩER2TbT\H |:kmR˩0q\I>؛ ;r%&g t4q{!)O'I83q5^$7/:l@%:_1MuB=:*ڕ>칅BJ^l#J ŬcD_GAE-=RJ\c +ѳbs!dgs:+(ahL"f0X1RB;#XZD)4;-vHe$^8VIRF0h "&0)f4pN$^kwys iL LZG(adx01 q^P$g@^ qrnalDY+ ,cRk)6[%vrO B* PkA" h /+^(cP)Z2xn,r]2J>bP5B>/PwM-Rʐ.xYV.̼䧋O+@%D Be90%w7ס^)6_?ZOj:ufaMj.UF5OĚ_;kRY9ƍ>m]d&2ByĴRL m OHw@&@+=x) u#d|qNo˃B[xfzQ5vB!]Gmb&Lhf\!($,B*j$0AhUa:>:,L)'d0p,O+S)f}d(ȼkc_” pz”2ji6\^'{2 Np3^}rVRESEñ^وW)i1?%T6ʌ؈=* ^ԤtlNnxC-ɉTDMI|yv 6!}XY,5N@&ANGZcjV nX{X{y hiT#HS0  5*<:/(:Be*L)4ga'q^YOYҟOKeT"[n&ukɦS7vՙF)7Z y-lQX%K;,Pٸ Sq?hjͥGQHdJЕ ΋(] `Cwda_t gx@& `+lIIlj;?*BwPq?wdYp:SvՋ_h%M=.+0sP72[f:-g_2F "fmd*48EV>>-8)f+vw}h0<60jhXJ98:ôa4KCCZ įm6dn*io&!S g* 8,_H[pB]$n~8P(!ƈ9Qb̘ܤ# `D1!FafL6L v<%ȤQ;o vhACE/j?I%UOvU~QnxnB8bzlZP0qLBa^'I-Bb42f*D,<">I=sAqqfj"3-˭ӵ$eC{{)srP euц GBKa$3sJfhIrVhu -% xŀS9VyT2ZgϺl#,dbcTY߶KJX?1 t/IpSM>VC. vnBF+yDݬ%kU*ѱ93RWA?p vz~Ȑ?ajG' m܀B4@ 8mN&:$vvJ{|mu+n>7\̺pWIeH{Dž^ho'[~[^naV@çO1XSuqSrtM,޹EQXF.>rmZi? #5}=Ity"s2x%vBj"]fZ/rH]e=p<+eytRUw>"ѓʺi]SRL-H9%J&J,%D* $\h7hy-7fr$͗=[rN'8 ˶ *+5. G8$(YV=~$fkC-!$=zi3O1dȯȣs[]4BWs|IzEodvo圈Rrw^j [x)cD_\kpY]Tp_'#+Ed-u DQгc_cg1W=tCff-A=_c533%itd+BkS8J]ۂ)cCulBtJnVV۶nD0,j($q`΄& #BJxaA5#8cfX)VAh61S[r=IeXX15"މi] [ZS%D;K2j늴w.6\)1>$[ if"{1bcaɱtEŚ xA|Azzu:AMz8xeFGěyTPrMzA>/M:Do= 4q! úѐNVB0N~[ A7^_XE /׷.><0"=]x) :l' Ԙ)q(G\=?~l"8?PQVzyO("v4 TFse%L ( 7^Ԥ}+,/iXY}_hyYBvsQ?p?*u]=8׍ ۋz~us]\^}Ѥ WwT,ʧ1uTY=}X#t @^lZF\P8AV3εB+At )$ i;/5ӑE1H:ܺ낛8^n)Ɛ2IXvp |,"_+N"e|L?; YCn2T>diY*pfa0zŹ#ސ>׌ \X[?N"{G;F3v|ߤ|OO.*ϧ GxxW^,]h+?gO3+xV& )51x42}Mb V(]4`Dt@@ Iqx6wJ$m_9{â,\*^ \?AileLb,D8?VJ%y?7m׫ ;8ޕM\b9 n7CeeğT򲸿+z+F>?&3myg-Jź-V.7|"Ko]kPŠFtv;JH9v=Z6|"zLe55:&[3@S܂'q}x(hM ;lMt>9bcW'^7{fft}n+_?E|E؇+&xG|EFȦ>^Tb~8}8z{MإմԵJr?NxU{|Jޝ\e 4555p bzkhLLY-Waf&Pp5! 
jzD)I*!A1`U~C(EYTsi1:ƨ&Vc?d̔o3[T'gFQ}r`F_ pU棼i;e*=#$ - FV;P A uFC Szoa:k7Ѐo"Lydz*+Z'5|WQ*%cB*w/;m]*Wښ3X EmfF &*׈ŎE;OR0nߒbh*~Z L 4R[T CSs,)4rW+#MNvF%+ H1x AH%KPfIqT負ژFQZkFW=Bpr7n 3ܲTgXy2]ByUPzT% d2$#)t:0G -yCL 1c?^r&^"ݒNW+~MuHTB➎Rw\oG‚_OOpI`8r"֭lcP!5*0pK{xǥ-BJ:LUI")\&qdMP}dgHXmjQ7JyDRU$GWU0ܧ @%g Jll+,̺3jkPɯ\bR`Lm F.II:. A)߻*.‘F-:j-KW=%XyE@(}r26`M1@$'q߸^ e~y{k\^ryx-g+֗hޖ_myK>kXpYe"pr Å^fބxSuѬ/yƽ5K% ֺ<.*CAdQaZeWu7B Zyd4SB;܇+VLgN ؞|N:\8XfZ=fPZ=Yd{2>鬾[z}-` 48+Aaz$:*Cz7Ng?e*"/MZww?u%3}qk}|̧l6\Y }<E$Pd vq`+{}ϟXI̗藏YEe%`sw~d:^}_hdXӊD QYCFt p^usn vPlTٙaNoMkeJZbZ5w\xZ30"~BZUإ$Ѿ=Sl7_|o=k2/)bLm"L1kWƜ2ƍ%y.}VL(VYIƐF eђV$”PpJ8ƌ4^\> PDʹ1,꾹h/\뗒;{sYƒJ?U{l:Hm"%0Oȹ "MJjUSviofM̞TgWЏ/5K?SJb^Ab~ke?!$;eJd3lU 7 [VSH+=Y7O=3RNΤ=*zG<~(;^@{~hT>4KzGٚzuhih&zu~?tUb £Mɴau lNx[nm(;n #ܗrto?OxMRHL11h~}$yjLnZ1[{EfM>`{MO던:q#GE^݈VRAwfEy$no>SK~qHk*cdR~̡ڂЩAt646Z,B\&aN+\qj=݆ު͞w){8LȞY^/C.])WD+xtln0KΔGR9g=:>pI$z}W{|aQյ{";bw- FkUS;ٗo ^M@É@~_ƏȆ.B?ԿnG0B qʨAP:;S=ՠAoWB V`'t0y]YݩcB-'.jz/UVкC*??J;vZ+ ݗ+l\\,c¹?MzzawuH4VD,FJae^%.=f2Gd@+I4Q_+߱?]|( gQ}_!l$bo) Oq3*~|T֚QRqOVt*d+OdD&ęȝI"KY*ftRV% ۪j Rl\+o}XR>P_jQZ,<eͪ Jf@KTjQq1(IFkzsܐVLOPaWMNdjErԊ)4$`@ɎխN2+)yY>=;P(f@j!Ѫ]4MH..4fgFv~<鰷QHMr"7{pb}޾e"話JvsJVS9cZG5y41ed`xYugLȊB?0/G={f( p5{q"<޲soYBuAj@ }7r䍙(u `?xݗ_0fW;/ }u?881NWg'ފ1=y/lȯ쎡!CgtPO 8Ce .fاyl֕(Ԑμ"oM^EDh )1|U̜[z*^ ?z@A9A$|*ؽ*4.h pVehu"Ʈ=,FQ_m'WAɏ}yCaJ E*vh3^|>^\g>Ĵ 6Sj˻0JAoi 8~\8zWU߽}6:0-VHK&ہ L[R<5#m7)a!{l#_nWʯ̸XpC>(] @ &0Kr>(Cf9g5 8UZ9D,&p8RC0-<&Y≹d&t#kU)7"Z_~XaR*Q)I1N2qgPDtwBFA%++p*8k3x5k)%rLQ1o: 11덶6BAu3@q,3wӁfBhq:.+ SA_>B @.{?מyD/yJ^P)u>PB,aˮY3Lδ""2#֥`0!fpE` jЌP7&,pBP{q-ϑZ4<2#hX̆$Y cYdl,_4ipپ9`gVjsn$m7)ۗQ:ژמy'hwػ6nfWyV6C2?[8I^qdٕA~ZYZIWܛcA.>3;73$ꔓR%GUs Q҈sS+p<tȕ$mYl/Y`о7rSd@-0}:]0h[ʍ;GoeszQwvX-r-ZnA:moGbDž>nZ|y7|X]hb`cȐl>,`7<[#Zvnw)/]?r j i4'%aتv `?GFuÇnѡ@jyYC6iZfVs,t{v-QxK?h i1Ze_͈UWSΪમ4]X[ӚRÌMObc^MɎg|Xhi ;UBGR@^(y/GSkG[}pwq'Y:k X ޴t컓@=DVYM8@|t6CjmJjM+%GOh>sTNF/eʭ-q%?ebRF9$cǫ!4ġ^\*\]'[mvLh蚅<ݽj7{5GB#eʒ/aιXJE׌;DVm-Bpj#*&4.0j.RO̵{՛V6$".&)P0,8i%T󌭩8VE6lk"bz^.U]0OU0h_BU KMPEBdf(QI" $(d?18C2b!3hB~W7uWwꦋQ.ցPⓧ2l݈R -1,TS/ `_8R-,)e̢(@,HemUPQ~ .Rs2ՊHSUZ52 LN 4`n꧷uœZ ^:Z;?eك_0QW;DBa6M>,]WXH5fwU陑(0>Ecg˳}q7sn [|-F=LRλI n՛ջ+A9nA;;_b8J+.:uB'.ۂs0>gy|X:rEvQ`e^! y"$SCua[d- v-5LnMH39ԘbOQS=jP |D't:ڭ8$$iTvkBB+j8'nNu[NnMH3 S{='SaD5>-K 'n =jƉ(K˶͛W{4g$׷4D6q,,m5 [*8 homަ@v:z-vّ3XvsL*iĩa~8vNŮbϪ JiwȜ/aEaרVw Ui]_ aČɎ>Q}f@KI=+ Pއjt6 }.rKQ…  wRYV#̑*2w_Ӱ  z$?8SϚS ū&'p ,_^ESwTz-t-:Җ>3Codkڒԃ"??uW1hyWc:7ފ){%]U*<#꽊!mѯ,sR Vf҆eI/C Z^/ת-S+\2hͷx5iX-$\'Q} w!.\ou\4F٢iY~~0yCX]}j; k*u ?ӤX_Q*Ц}i~4Y̸h%mBڊ_r[@sGcZPqsxOsQU޸,Q 0.TR63{ 3 g^C[/ \.5o;9e'IJH|)VZZlA:ߗgKVU.m4Y|\ib'HO_Քn ԵХW(d/ Ӏzb#W~t7C{5pj{ŧhv1/VS_>OXxilx~u- ͡p7fy]6sn_ZJ k9ᴛ}(|k h74)ut-ԎU_+2Z:RDZ/PFfOG+}J_ nRrb* IL2¨eQ@QER8hfz#wK_/FWϲ\VQDFWHkU"hI'?Yh &Kos|3#cwe8t-JhFDkrrPŜZ J!j bs`I';' C~-7D'0&:\"fj<1FJj!Y* Y' HIrr7$L^|㩳Jx _jEE,>aR{^FTͳ=Mhˋv3 բه~D W )O;hN ܶR~&w?^ͫ^j17 |2}cm}u)&Km,DDƩT&yi xƭR C/!hU - b|+lUH a M54VkQLôf4b:--DU笊Vq1jNm"]'$fhH 0ǑHBUYͬ0aXۋŌ>eIqKI!e,fڠq|5idZOZdҬ\iz1CVĵ|Cѣ!w2-Ƿ/4zz8l=N̗z=O~8=! 
sO" ;̉%u(} k9Ԛ1\afGr%\?)f|4(iFrzqNXwXtCxaM@=H8Hd#H,%4l G٧^*\jߌsC-L(BT_EtbV^ӫDHWZVow-J~2A -tV``*0aeWKw@PDmz%FJ[fajra= ŭo?anjU뼞\O)gmLH&*EFOmJgZS(p$RL!Z4"kels A\̐{hD1.Bc/[?۸D."b혔ӕhsY&,ŭѻA؁ɍ4by12JPzE,9tX{G }k3Z[ģ;v2]Jl*B.UH¼8%t38;I0%Ɣi7CW鷋z$3Y96 :~{sp4!nRt[8K "Iݹ~0>s>O:Wz4=L}moSaMCȢ3a: z_DArʼn 7,ML+^*℆ji@>4}ګMzYb3F=-i6!~f3i}8˃x#R(CF QQ<[,瞽Kk֐P]NkݖMNR`%>ߜj)Q>IB(7$׾6:V=Xs,z.e@Hz_rL5pp$'zɨsB84ߥw᫓ ?ij ՄQۢ6JM!R,1Y\_C+HGI],S4xу^Akoš qb(<K3*ċM(q|_ 4 A6=DU+5$~ޯRSY^!I|rH D)ƛm r*i9+]p)X'6aōcA¹ƒR1 2k {c-̣>DT2ե_VhCTݣZTDIs] ̈́;<>ͨىH_aYqVb#y, V!aKm"00aú|`X1Oz|u~\].I{4+˿7B;Y%{jHu9e}_Lw N8ELbsT-SM$=DicZTqnhIr Ny-=.k,>ӌjA\K r f^(#uJ :z譪S]_Kg깤 !RTIH@jc>V0B0d85cV HyGAK*\񭉼S$yf]56Yyt7K=Oz8F TBT*8o-lj 4JK4cBKljZ.$CRiZ X=Q -լAr҃R+E.XHz}zZs=.5/%[HzҳeJY9`R.BF1[UP{$|pgN"kw}WozOv޼ލ;z4ts]t + )m6Fy &>m}u>.5*0{"=YEE &n{Z"`A2~/B](-%q9O2g&:{-M3[<&ݜ kww5~/ NHe+w =.iVC t{ɬ<tEtLL^<ppHY:%=o< 5c(Ҁ"@0Xp,WcE/&}NaK?:jH-$icT$G g[ nX~.qJ)R'f~FU`aQF6DS(߱nB`b mxnZIѦg ӆS F" jy,WŠ:`(f!.ȚjϕEU՞ks+\[ zx|?~4!(XBs`ȐoZz6}"^&G$1H ]߷IljR7Crf8dWŪ~~_ɾjc6(oSeƪv1H @OͷӲ,\!ibW4-cZj|Ĵ,i6vSWNA0C T.n\?\6y H]A7HM%)17SMMTJcj>M[[ Kklr0tvNE ?wK} u櫪_(2^}|jI/|up΀wxoB!r'WCww`~zK>|uO{˦+_}D *XЎO~㼺iʾev⮯"ý_F u4hBw*[x!C۬շV]|$GC7Fu[I UE4c|P~v#* 5~\%z=Syx#Zc ~q5}lp]E.Myk%2!#ꎩɽmvN̩Հ| ic-ZC:b~{X%HwtPQݩղ.H%G% ΤQ1Xj„5 ,VŭE/]e@CZ!q]|`ejhZ^!bB(iyT[ٓ2=GOv-4Jtw^#Eki |JrH)"䙇hVL1Fe7knĄN3RۂSfn#C4',ͫ!b:Hn y/nSD3ѝ<('On1O7NfdD2<[I.lsiy/,3vuZ5yoH9fPyyg<_~ų@Ӂi4,.:#̐y=rl`/< 7U{Z}snNqb)4ːW *T[[S1)+V#6$J뇧:"fE(4ʃRԥrV,/*kX[ǎB HϠgE:w E_o6LM tsXU/^oʷ_~iO?1ǿ~~c&G^&RFMBJѪpJtR@4hT]iطhJ)CT ȅu,Tn;1,)c(U]֢TYgJmM <ײ 6v6Mp*UesPt "8a 5F zX~CzM6V~'oW(F}% O e)iF, eHC:R;yr)3 O& {R2i͵"25jVnXs1a}F rEY lDH")"䙇hVLY}e/ [DH&Q HY3(E2V"dy!S~vSU21*H-8u2_-Hlmy!:>\xD?n1;$\VI1owSlR>Mal:rZ&\E>fՂBVzQ*;zëR<*T"IOpЃ(/E 14753ms (14:33:42.085) Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[1717933797]: [14.753556027s] [14.753556027s] END Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.085550 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.086165 4869 trace.go:236] Trace[1641039331]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 14:33:31.951) (total time: 10134ms): Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[1641039331]: ---"Objects listed" error: 10134ms (14:33:42.086) Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[1641039331]: [10.134475821s] [10.134475821s] END Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.086200 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.086589 4869 trace.go:236] Trace[314944033]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 14:33:28.267) (total time: 13818ms): Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[314944033]: ---"Objects listed" error: 13818ms (14:33:42.086) Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[314944033]: [13.818823153s] [13.818823153s] END Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.086610 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.089518 4869 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.090513 4869 trace.go:236] Trace[514746400]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 14:33:27.996) (total time: 14094ms): 
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.090513 4869 trace.go:236] Trace[514746400]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 14:33:27.996) (total time: 14094ms):
Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[514746400]: ---"Objects listed" error: 14094ms (14:33:42.090)
Feb 02 14:33:42 crc kubenswrapper[4869]: Trace[514746400]: [14.094105574s] [14.094105574s] END
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.090536 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.099372 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.157959 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body=
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.157987 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body=
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.158039 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.158132 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.264309 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.266053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.266120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.266139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.266243 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.284168 4869 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.285196 4869 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
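The patch_prober/prober entries above record the kubelet's HTTP Readiness and Liveness checks against the kube-apiserver-check-endpoints container failing with EOF, i.e. the connection is dropped before any HTTP response arrives. A rough Go approximation of what such an HTTP probe does (endpoint copied from the log; the timeout and TLS-skip settings are assumptions for illustration, not the kubelet's actual prober code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe endpoints typically serve self-signed certs, hence the
	// insecure TLS config in this sketch.
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:17697/healthz")
	if err != nil {
		// Any transport error counts as a probe failure; against a
		// half-started server this surfaces as "EOF", as logged above.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status)
}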
Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.286819 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.299044 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303062 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.303096 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.314037 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.320538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.320845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.320939 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.321139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.321216 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.347671 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352273 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.352303 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.368065 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373619 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.373765 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.385559 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.385709 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388123 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.388168 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.399537 4869 apiserver.go:52] "Watching apiserver" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.405209 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.405706 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.406221 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.406319 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.406410 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.406875 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.406884 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.406970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.407056 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.407098 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.407147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.408802 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.408965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.410633 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.410833 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.411865 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.412232 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.412306 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.412392 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.412430 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.420887 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:39:17.508903278 +0000 UTC Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.478313 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.490932 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.494519 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.504692 4869 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.512053 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.515444 4869 csr.go:261] certificate signing request csr-mmbhx is approved, waiting to be issued Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.532344 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.536449 4869 csr.go:257] certificate signing request csr-mmbhx is issued Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.550592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.562827 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.574856 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.594000 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598166 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598223 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598326 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 14:33:42 crc 
kubenswrapper[4869]: I0202 14:33:42.598347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598369 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598390 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598410 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598445 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598464 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598506 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598524 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598571 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598679 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598697 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598715 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598736 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598757 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598776 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598793 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598810 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598831 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598849 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598866 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598886 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598926 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598948 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.598981 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599003 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599025 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599045 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599066 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599086 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599105 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599123 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599163 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599186 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599210 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599293 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599329 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599370 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599387 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599411 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599433 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.594374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599550 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.596945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600868 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599409 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599473 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599679 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599835 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599891 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599930 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600270 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600575 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600766 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.600827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601034 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601110 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601181 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601217 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601259 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.599453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601314 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601328 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601355 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601399 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601423 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601449 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601471 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601521 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601540 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601558 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601576 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601628 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601664 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601697 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601788 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601810 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601928 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601953 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601973 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.601996 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602017 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602037 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602054 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602132 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602154 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 14:33:42 crc 
kubenswrapper[4869]: I0202 14:33:42.602202 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602213 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602251 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602314 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602357 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602380 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602402 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602421 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602442 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602460 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602470 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602484 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602509 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602531 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602570 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602584 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602588 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602624 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602644 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602665 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602683 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602703 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602720 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602745 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602792 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602823 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602842 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602878 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602897 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602931 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602967 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.602996 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603050 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603070 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603145 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603152 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603205 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603292 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603312 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603328 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603345 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603362 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603446 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603461 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603479 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603496 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603515 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603531 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603549 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603601 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603618 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603635 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603654 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603689 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603708 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 14:33:42 crc 
kubenswrapper[4869]: I0202 14:33:42.603726 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603746 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603763 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603805 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603823 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603840 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603857 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603873 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603890 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603943 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603959 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603977 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603994 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604014 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604033 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604051 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604069 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604086 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604102 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604119 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604138 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604156 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604174 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604192 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604210 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604243 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604261 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604279 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604298 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604547 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604613 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604661 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604811 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604832 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604971 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604984 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604995 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605005 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605014 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605024 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605035 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605045 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605054 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605064 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605074 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605083 4869 
reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605093 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605102 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605111 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605121 4869 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605130 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605140 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605150 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605159 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605168 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605177 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605188 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605198 4869 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605209 4869 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605220 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605230 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605240 4869 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605250 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605260 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603597 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.630239 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.631009 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.631029 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604021 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.604625 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605060 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605293 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605585 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605703 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.605779 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.606296 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.606527 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.606763 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607017 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607257 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607542 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.607895 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.608050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.608119 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.609160 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.609256 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.610140 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.631668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.631973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632361 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.632871 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633322 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633768 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633869 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634154 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.633275 4869 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634430 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.611427 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.611816 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.612103 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.612497 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614226 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614456 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614663 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.614786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.615166 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.622469 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.622638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.622789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.622779 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623069 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623269 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623469 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.623711 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.624015 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.624032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.624982 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.625229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.625954 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.626330 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.626697 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.629142 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.634720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635099 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635132 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635350 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635417 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635423 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.635624 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.636062 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.636487 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.638146 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.131511536 +0000 UTC m=+24.776148306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.640827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.640841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.640855 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641106 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.641120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641133 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641148 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641218 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.141199014 +0000 UTC m=+24.785835784 (durationBeforeRetry 500ms). 
Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641279 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641289 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641298 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.641326 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.141319847 +0000 UTC m=+24.785956607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.641635 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.641952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
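
Taken together, the interleaved entries, MountVolume.SetUp succeeding for pods such as network-node-identity-vrzqb while UnmountVolume.TearDown drains volumes of pods that no longer exist, are two halves of one desired-state versus actual-state comparison in the volume manager. The diff below is an illustrative reduction of that reconciliation, not kubelet's actual data structures.

package main

import (
	"fmt"
	"sort"
)

// reconcile compares a desired set of volumes (from pods that should be
// running) against the actual set (what is currently mounted): volumes
// only in desired get a MountVolume operation, volumes only in actual
// get an UnmountVolume operation.
func reconcile(desired, actual map[string]bool) (mount, unmount []string) {
	for v := range desired {
		if !actual[v] {
			mount = append(mount, v)
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount = append(unmount, v)
		}
	}
	sort.Strings(mount) // deterministic output for the demo
	sort.Strings(unmount)
	return mount, unmount
}

func main() {
	// Volume names borrowed from the surrounding log for flavor.
	desired := map[string]bool{"webhook-cert": true, "ovnkube-identity-cm": true}
	actual := map[string]bool{"webhook-cert": true, "etcd-ca": true, "utilities": true}
	m, u := reconcile(desired, actual)
	fmt.Println("MountVolume:  ", m) // [ovnkube-identity-cm]
	fmt.Println("UnmountVolume:", u) // [etcd-ca utilities]
}
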
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642337 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642338 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.642723 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.643018 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.643129 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.143089442 +0000 UTC m=+24.787726282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.647027 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.647232 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.648612 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.648823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.648845 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649323 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649398 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649704 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.649741 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.610450 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.650162 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.650452 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.650646 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.650900 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.629141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.643185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.644329 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.651245 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.603738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.651616 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.651698 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.652251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.652550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.652772 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653538 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653683 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.653747 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.656621 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:43.156581526 +0000 UTC m=+24.801218296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.643346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.656667 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.652604 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661573 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661839 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.661962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.662048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.664478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.664477 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.664690 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.665401 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.666673 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667395 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667410 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667564 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667714 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.667994 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.668427 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.669117 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.671724 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.672116 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.673664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.673802 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.675054 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.675220 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.675559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.676717 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.679628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.680479 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.680951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.681606 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.681961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.682356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.683768 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.684055 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.686995 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" exitCode=255 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.687081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.687159 4869 scope.go:117] "RemoveContainer" containerID="b73d1954bb7b6bacb4bceeda2fa08b622e61fefa7ca5e1b20c18ea7ac4197275" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.687317 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.694665 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697071 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697152 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697287 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697418 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.697671 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.704755 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.705622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.708796 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709789 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709806 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709821 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709833 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709844 4869 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709856 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709869 4869 reconciler_common.go:293] "Volume detached for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709882 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709894 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709955 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709973 4869 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709986 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.709996 4869 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710007 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710018 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710029 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710040 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710051 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710065 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710076 4869 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710090 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710135 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710150 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710163 4869 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710175 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710186 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710197 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710210 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710221 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710235 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710247 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710259 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710272 4869 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710282 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710293 4869 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710305 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710316 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710369 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710380 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710391 4869 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710403 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710414 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710426 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710440 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710455 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710466 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710478 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710490 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710504 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710515 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710526 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710538 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710548 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710559 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710570 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710580 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710591 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710602 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710612 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath 
\"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710622 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710632 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710644 4869 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710658 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710668 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710681 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710691 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710702 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710713 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710723 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710735 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710747 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710759 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 
14:33:42.710770 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710782 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710798 4869 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.710811 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.712458 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.712690 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.715888 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716608 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716659 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716680 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716705 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716717 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716732 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716746 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716762 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716774 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716788 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716805 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716818 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716833 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716845 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716857 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716869 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716880 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716894 4869 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716922 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716934 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716946 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716958 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716968 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.716987 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717000 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717013 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717023 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717035 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717046 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717059 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717071 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717083 4869 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717096 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717109 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717123 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717135 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717149 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717164 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717177 4869 
reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717191 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717204 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717216 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717230 4869 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717244 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717256 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717269 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717281 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717293 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717305 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717317 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717330 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717341 4869 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717353 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717365 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717376 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717388 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717400 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717412 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717424 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717438 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717452 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717464 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717476 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717489 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717501 4869 
reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717512 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717525 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717537 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717553 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717566 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717579 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717592 4869 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717604 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717615 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717628 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717639 4869 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717649 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717661 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717672 4869 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717686 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717699 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717713 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717726 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717738 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717749 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.717760 4869 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.720499 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.725684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.726127 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.727282 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.729541 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.733323 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.737787 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.741073 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 14:33:42 crc kubenswrapper[4869]: W0202 14:33:42.741836 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-5b1584607f2339f1908a38e831e96997563182abb2b966f51857b3f34547750d WatchSource:0}: Error finding container 5b1584607f2339f1908a38e831e96997563182abb2b966f51857b3f34547750d: Status 404 returned error can't find the container with id 5b1584607f2339f1908a38e831e96997563182abb2b966f51857b3f34547750d Feb 02 14:33:42 crc kubenswrapper[4869]: W0202 14:33:42.745185 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-249f9008ba910ec308b66aadf3a6b05f165f582beef63184dcb57c683e2a6389 WatchSource:0}: Error finding container 249f9008ba910ec308b66aadf3a6b05f165f582beef63184dcb57c683e2a6389: Status 404 returned error can't find the container with id 249f9008ba910ec308b66aadf3a6b05f165f582beef63184dcb57c683e2a6389 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.751198 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: W0202 14:33:42.758124 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7ed5825baeee6b42bb9774be17f116804605bd3799177814b1c7fd9c68f72b11 WatchSource:0}: Error finding container 7ed5825baeee6b42bb9774be17f116804605bd3799177814b1c7fd9c68f72b11: Status 404 returned error can't find the container with id 7ed5825baeee6b42bb9774be17f116804605bd3799177814b1c7fd9c68f72b11 Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.763658 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.781116 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.795573 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.826011 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.826056 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.826789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.826835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.827017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.827043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.827064 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.835438 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.866324 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.896322 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:42 crc kubenswrapper[4869]: E0202 14:33:42.896592 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.900136 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.912050 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930562 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.930840 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:42Z","lastTransitionTime":"2026-02-02T14:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.945273 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:42 crc kubenswrapper[4869]: I0202 14:33:42.963318 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.033793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.136833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.137227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.137300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.137473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.137578 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.229771 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230033 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.229982275 +0000 UTC m=+25.874619055 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.230310 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.230440 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230533 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230618 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.230607951 +0000 UTC m=+25.875244721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.230550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230628 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230796 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230837 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.230786 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230952 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.230927299 +0000 UTC m=+25.875564079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.230989 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231018 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231033 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231132 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.231104604 +0000 UTC m=+25.875741534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231252 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.231387 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:44.23137562 +0000 UTC m=+25.876012550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.240817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.241160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.241284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.241383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.241477 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.344547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.421048 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 16:16:07.179248412 +0000 UTC
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.447901 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.462241 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.462409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.466824 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.467519 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.469343 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.470180 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.471488 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.472159 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.472956 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.474229 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.475057 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.476238 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.476865 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.478331 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.479038 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.479713 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.480890 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.481692 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.483041 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.483610 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.484399 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.486048 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.486644 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.488118 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.488717 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.490448 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.491460 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.492544 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.493394 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.495263 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.496598 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.498123 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.498751 4869 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.498897 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.501782 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.502564 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.503152 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.505388 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.506878 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.507669 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.509235 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.510181 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.511441 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.512298 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.513685 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.515191 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.515879 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.517146 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.517834 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.519586 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.520263 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.520900 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.522099 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.522808 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.524971 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.525617 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.538175 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 14:28:42 +0000 UTC, rotation deadline is 2026-11-16 15:10:47.10648997 +0000 UTC Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.538241 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6888h37m3.568252016s for next certificate rotation Feb 02 14:33:43 crc 
kubenswrapper[4869]: I0202 14:33:43.550606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.550649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.550660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.550679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.550691 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.653952 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.691660 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7ed5825baeee6b42bb9774be17f116804605bd3799177814b1c7fd9c68f72b11"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.693519 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.693556 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.693752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"249f9008ba910ec308b66aadf3a6b05f165f582beef63184dcb57c683e2a6389"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.695007 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.695043 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5b1584607f2339f1908a38e831e96997563182abb2b966f51857b3f34547750d"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.696450 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.703600 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.706363 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.706662 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e"
Feb 02 14:33:43 crc kubenswrapper[4869]: E0202 14:33:43.706897 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.713480 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.729265 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.743113 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.756627 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.759138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.776999 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.794537 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.814516 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b73d1954bb7b6bacb4bceeda2fa08b622e61fefa7ca5e1b20c18ea7ac4197275\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:24Z\\\",\\\"message\\\":\\\"W0202 14:33:23.822540 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0202 
14:33:23.822872 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770042803 cert, and key in /tmp/serving-cert-4014544013/serving-signer.crt, /tmp/serving-cert-4014544013/serving-signer.key\\\\nI0202 14:33:24.401431 1 observer_polling.go:159] Starting file observer\\\\nW0202 14:33:24.405042 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0202 14:33:24.405279 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 14:33:24.405989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4014544013/tls.crt::/tmp/serving-cert-4014544013/tls.key\\\\\\\"\\\\nF0202 14:33:24.945153 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] 
\\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.852089 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.859195 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.877961 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.899589 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.919203 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.943502 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.959716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.961864 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:43Z","lastTransitionTime":"2026-02-02T14:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:43 crc kubenswrapper[4869]: I0202 14:33:43.976489 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.007661 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7tlsl"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.008180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.009660 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dql2j"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.009992 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-d9vfd"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010080 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010177 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-862tl"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010715 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010793 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.010995 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.011188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.011391 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.011997 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.013992 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014004 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014137 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014217 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014358 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014405 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014504 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014545 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.014592 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.015316 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.016156 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.032268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036209 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-socket-dir-parent\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036276 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdcm\" (UniqueName: 
\"kubernetes.io/projected/a649255d-23ef-4070-9acc-2adb7d94bc21-kube-api-access-5wdcm\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036309 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cni-binary-copy\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036384 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-daemon-config\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036438 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcz5j\" (UniqueName: \"kubernetes.io/projected/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-kube-api-access-qcz5j\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-netns\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-bin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qr7b\" (UniqueName: \"kubernetes.io/projected/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-kube-api-access-9qr7b\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036574 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036622 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-system-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036708 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-system-cni-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17c822d-8d51-42d0-9cae-7b607f9af79a-hosts-file\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036829 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036897 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cnibin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-multus\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036978 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-os-release\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.036998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-etc-kubernetes\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037017 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a649255d-23ef-4070-9acc-2adb7d94bc21-rootfs\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037035 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-k8s-cni-cncf-io\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 
14:33:44.037055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-kubelet\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037077 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-multus-certs\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037101 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-conf-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037120 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a649255d-23ef-4070-9acc-2adb7d94bc21-proxy-tls\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a649255d-23ef-4070-9acc-2adb7d94bc21-mcd-auth-proxy-config\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037164 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkw2\" (UniqueName: \"kubernetes.io/projected/c17c822d-8d51-42d0-9cae-7b607f9af79a-kube-api-access-jvkw2\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cnibin\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037200 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-os-release\") pod \"multus-d9vfd\" (UID: 
\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.037237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-hostroot\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.052172 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.064493 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.071287 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.084703 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.099569 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.116900 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.133442 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcz5j\" (UniqueName: \"kubernetes.io/projected/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-kube-api-access-qcz5j\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-netns\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138221 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-bin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qr7b\" (UniqueName: \"kubernetes.io/projected/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-kube-api-access-9qr7b\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-system-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-netns\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138366 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-bin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-system-cni-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138432 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17c822d-8d51-42d0-9cae-7b607f9af79a-hosts-file\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138485 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cnibin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " 
pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-multus\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-os-release\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-etc-kubernetes\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138597 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a649255d-23ef-4070-9acc-2adb7d94bc21-rootfs\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-system-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138623 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-k8s-cni-cncf-io\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-cni-multus\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138654 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-etc-kubernetes\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138654 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-kubelet\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138703 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cnibin\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138709 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-multus-certs\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-var-lib-kubelet\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-conf-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a649255d-23ef-4070-9acc-2adb7d94bc21-rootfs\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138777 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-cni-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138850 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-conf-dir\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138858 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-k8s-cni-cncf-io\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a649255d-23ef-4070-9acc-2adb7d94bc21-proxy-tls\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138983 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a649255d-23ef-4070-9acc-2adb7d94bc21-mcd-auth-proxy-config\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c17c822d-8d51-42d0-9cae-7b607f9af79a-hosts-file\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139015 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvkw2\" (UniqueName: \"kubernetes.io/projected/c17c822d-8d51-42d0-9cae-7b607f9af79a-kube-api-access-jvkw2\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138892 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-os-release\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-system-cni-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cnibin\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139074 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cnibin\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.138894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-host-run-multus-certs\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139095 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139197 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " 
pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139205 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-os-release\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139254 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-os-release\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139276 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-hostroot\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-hostroot\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-socket-dir-parent\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdcm\" (UniqueName: \"kubernetes.io/projected/a649255d-23ef-4070-9acc-2adb7d94bc21-kube-api-access-5wdcm\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139450 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cni-binary-copy\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139471 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-socket-dir-parent\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139489 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-daemon-config\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139884 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-binary-copy\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.139981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a649255d-23ef-4070-9acc-2adb7d94bc21-mcd-auth-proxy-config\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.140144 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.140354 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-multus-daemon-config\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.140660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-cni-binary-copy\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.144952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a649255d-23ef-4070-9acc-2adb7d94bc21-proxy-tls\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.151535 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.159717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdcm\" (UniqueName: \"kubernetes.io/projected/a649255d-23ef-4070-9acc-2adb7d94bc21-kube-api-access-5wdcm\") pod \"machine-config-daemon-dql2j\" (UID: \"a649255d-23ef-4070-9acc-2adb7d94bc21\") " pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.161677 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcz5j\" (UniqueName: \"kubernetes.io/projected/34b37351-c7be-4d2b-9b3a-9b4752d9d2d4-kube-api-access-qcz5j\") pod \"multus-additional-cni-plugins-862tl\" (UID: \"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\") " pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.162514 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvkw2\" (UniqueName: \"kubernetes.io/projected/c17c822d-8d51-42d0-9cae-7b607f9af79a-kube-api-access-jvkw2\") pod \"node-resolver-7tlsl\" (UID: \"c17c822d-8d51-42d0-9cae-7b607f9af79a\") " pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.162673 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qr7b\" (UniqueName: \"kubernetes.io/projected/45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0-kube-api-access-9qr7b\") pod \"multus-d9vfd\" (UID: \"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\") " pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.168331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 
crc kubenswrapper[4869]: I0202 14:33:44.168384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.168408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.168431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.168450 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.170665 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.186978 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.205783 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.224607 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.239146 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240302 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240424 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240456 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240488 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.240511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240587 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240534055 +0000 UTC m=+27.885170815 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240700 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240702 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240719 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240736 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240872 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240828102 +0000 UTC m=+27.885465032 (durationBeforeRetry 2s). 
Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240872 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240828102 +0000 UTC m=+27.885465032 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240941 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240898564 +0000 UTC m=+27.885535334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240759 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240967 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240994 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.240988776 +0000 UTC m=+27.885625546 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.240724 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.241032 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.241126 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:46.241106129 +0000 UTC m=+27.885742899 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.259188 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.271576 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.275576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: 
I0202 14:33:44.294460 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.310695 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.322599 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-7tlsl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.330249 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.330685 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.342834 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-d9vfd" Feb 02 14:33:44 crc kubenswrapper[4869]: W0202 14:33:44.349315 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc17c822d_8d51_42d0_9cae_7b607f9af79a.slice/crio-3494e8706970212f0405208ae19c7bd1d0a492978519bd8cc8aa2cdc0f67b7a7 WatchSource:0}: Error finding container 3494e8706970212f0405208ae19c7bd1d0a492978519bd8cc8aa2cdc0f67b7a7: Status 404 returned error can't find the container with id 3494e8706970212f0405208ae19c7bd1d0a492978519bd8cc8aa2cdc0f67b7a7 Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.349977 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-862tl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.353711 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: W0202 14:33:44.363404 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda649255d_23ef_4070_9acc_2adb7d94bc21.slice/crio-202ed05d71cb0717cc85d3bab105a12270594e716af32f856daa82198cefe4d3 WatchSource:0}: Error finding container 202ed05d71cb0717cc85d3bab105a12270594e716af32f856daa82198cefe4d3: Status 404 returned error can't find 
the container with id 202ed05d71cb0717cc85d3bab105a12270594e716af32f856daa82198cefe4d3 Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.381281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.381625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.381790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.381930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.382044 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.405822 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmsw6"] Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.406893 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.413606 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.413883 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.413888 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.417056 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.417380 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.417590 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.417726 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.421498 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:11:58.491157664 +0000 UTC Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.440102 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443633 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443669 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443690 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443708 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
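[annotation] The x509 failure repeated through all of the status-patch attempts above is a plain validity-window check: the webhook's serving certificate has NotAfter 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02, so every TLS handshake with pod.network-node-identity.openshift.io is rejected before the patch is even sent. A small Go sketch of the same check, assuming the certificate were available as a PEM file; the path is hypothetical, since the log does not say where the webhook certificate lives.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path; adjust to wherever the serving cert is stored.
	data, err := os.ReadFile("/etc/webhook/tls.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The TLS handshake enforces exactly this window; the log's
	// "current time ... is after ..." message corresponds to the
	// second case below.
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate not yet valid until", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Println("certificate expired at", cert.NotAfter)
	default:
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}

Until that certificate is rotated, every pod status patch intercepted by this webhook will keep failing the same way, which is why the identical error recurs for each pod in this log.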
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443855 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443922 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443937 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
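[annotation] Each of these operationExecutor operations is retried with a doubling delay when it fails, which is what the earlier nestedpendingoperations entries mean by "No retries permitted until ... (durationBeforeRetry 2s)". A self-contained sketch of that capped exponential backoff pattern follows; the initial delay and cap here are illustrative assumptions, not the kubelet's exact tuning.

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a cap, mirroring the retry
// pattern visible in the nestedpendingoperations log lines above.
func nextBackoff(d time.Duration) time.Duration {
	const maxDelay = 2 * time.Minute // assumed cap, for illustration
	if d == 0 {
		return 500 * time.Millisecond // assumed initial delay
	}
	d *= 2
	if d > maxDelay {
		return maxDelay
	}
	return d
}

func main() {
	var d time.Duration
	for attempt := 1; attempt <= 6; attempt++ {
		d = nextBackoff(d)
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, d)
	}
}

The "m=+27.885..." monotonic offsets in the log are how klog records the retry deadline relative to process start, so consecutive failures of the same volume operation can be lined up precisely.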
pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443973 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.443989 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.444007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.463214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.463714 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.463873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.464034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.485836 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.498278 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.522414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546086 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546173 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546189 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546242 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546370 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546518 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546542 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546572 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546600 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546709 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") pod \"ovnkube-node-qmsw6\" (UID: 
\"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546785 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.546812 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547634 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547771 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.547851 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.550927 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 
14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.550886 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551035 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551136 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551126 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.551960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.581246 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.581747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") pod \"ovnkube-node-qmsw6\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") " pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594773 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.594886 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.607890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.641723 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.677255 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.698389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.698706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.699200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.700031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.700133 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.701706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.701825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"202ed05d71cb0717cc85d3bab105a12270594e716af32f856daa82198cefe4d3"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.703239 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7tlsl" event={"ID":"c17c822d-8d51-42d0-9cae-7b607f9af79a","Type":"ContainerStarted","Data":"3494e8706970212f0405208ae19c7bd1d0a492978519bd8cc8aa2cdc0f67b7a7"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.704495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerStarted","Data":"a93e9410ff4a30dfbea3fe2daa15381760bf35e7d117feef1fe49b41f042acf0"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.706126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.706235 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"946593d04c6023c1d85ab29e96459a79ec8edef43fccac3ba1e08fbbc2505fc5"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.706954 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:44 crc kubenswrapper[4869]: E0202 14:33:44.707158 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.711216 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.735061 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.735390 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.759035 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.773892 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.789445 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804235 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.804423 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.824176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.841159 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.857518 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.872870 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.888552 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.903685 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.907538 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:44Z","lastTransitionTime":"2026-02-02T14:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.921221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.936005 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.952480 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.965114 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:44 crc kubenswrapper[4869]: I0202 14:33:44.989490 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9
lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.004428 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.010635 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.113867 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.216898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.216968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.216977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.216996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.217007 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.319839 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.421686 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:23:50.176941636 +0000 UTC Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.422904 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.461805 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:45 crc kubenswrapper[4869]: E0202 14:33:45.461993 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.526148 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.628683 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.710004 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.712774 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.714274 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7tlsl" event={"ID":"c17c822d-8d51-42d0-9cae-7b607f9af79a","Type":"ContainerStarted","Data":"bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.720537 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" exitCode=0 Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.720710 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.720747 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"ca0e0f37b2bf3d240e5eeec5425678446780834f9687e86b8adc4295de855905"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.724716 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc" exitCode=0 Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.724755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" 
event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.731401 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.734407 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\
":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.747990 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.763567 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.782514 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: 
I0202 14:33:45.795315 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.812078 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.830596 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.833989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.834043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.834057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.834082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.834099 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.846073 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.865864 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.881816 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.896499 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.911237 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.925170 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.937311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:45Z","lastTransitionTime":"2026-02-02T14:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.943612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.956445 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.984133 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:45 crc kubenswrapper[4869]: I0202 14:33:45.999432 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] 
waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:45Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.017457 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.035272 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.039439 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.054226 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.068224 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.082966 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.096632 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.117857 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.143458 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.247502 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267241 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267425 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.267387641 +0000 UTC m=+31.912024421 (durationBeforeRetry 4s).
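Every "Failed to update status for pod" entry above is rejected by the same admission webhook: its serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-02. A minimal Go sketch for inspecting the certificate actually served on that endpoint (the address 127.0.0.1:9743 is taken from the failing Post URL in the log; InsecureSkipVerify is needed here precisely because normal verification of the expired certificate would abort the handshake):

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Webhook endpoint taken from the failing Post URL in the log.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        // Leaf certificate presented by the webhook server.
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
        if time.Now().After(cert.NotAfter) {
            // Same condition the x509 error in the log reports.
            fmt.Println("certificate has expired")
        }
    }

Run on the node itself, this would show notAfter 2025-08-24T17:21:41Z, matching the error string repeated throughout the log.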
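The NodeNotReady condition quotes the kubelet's network-readiness message verbatim: no CNI configuration file in /etc/kubernetes/cni/net.d/. The underlying check is easy to reproduce; a sketch under the assumption that the path from the log message is the active conf dir, with an extension filter mirroring what CNI's libcni loads (.conf, .conflist, .json):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory taken from the NetworkPluginNotReady message in the log.
        confDir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("CNI config present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file in", confDir, "- network plugin not ready")
        }
    }

Here the directory stays empty because ovnkube-node (whose containers are all stuck in PodInitializing above) is what writes the config, and it cannot start while the webhook rejects pod updates.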
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267728 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.267777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267897 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267932 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267950 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267962 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.267971 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.267958765 +0000 UTC m=+31.912595535 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268006 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.267992526 +0000 UTC m=+31.912629296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268030 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268042 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268081 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.268070708 +0000 UTC m=+31.912707568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268084 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268116 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.268156 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:50.26814652 +0000 UTC m=+31.912783370 (durationBeforeRetry 4s). 
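The volume operations above are parked under exponential backoff ("No retries permitted until ... durationBeforeRetry 4s"), so a handful of failure signatures account for most of the remaining log volume. A hypothetical tally helper (not part of this artifact) that reads the decompressed log on stdin, e.g. zcat kubelet.log.gz | go run tally.go, and counts the recurring signatures:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        counts := map[string]int{}
        // Failure signatures observed in this log; extend as needed.
        patterns := []string{
            "certificate has expired or is not yet valid",
            "failed calling webhook",
            "not registered",
            "no CNI configuration file",
            "not found in the list of registered CSI drivers",
        }
        sc := bufio.NewScanner(os.Stdin)
        // Status-patch entries run to several KB; raise the scanner's token limit.
        sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)
        for sc.Scan() {
            for _, p := range patterns {
                if strings.Contains(sc.Text(), p) {
                    counts[p]++
                }
            }
        }
        for _, p := range patterns {
            fmt.Printf("%7d  %s\n", counts[p], p)
        }
    }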
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.300298 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-492m9"] Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.301070 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.303470 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.305727 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.306046 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.310803 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.319074 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.334540 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.350930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.350966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.350975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.350996 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.351016 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.354025 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc 
kubenswrapper[4869]: I0202 14:33:46.369469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgx7k\" (UniqueName: \"kubernetes.io/projected/728209c5-b124-458f-b315-306433a62a15-kube-api-access-dgx7k\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.369796 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/728209c5-b124-458f-b315-306433a62a15-serviceca\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.369890 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/728209c5-b124-458f-b315-306433a62a15-host\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.373612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.387171 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.401060 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.417849 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.422278 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:20:34.921069818 +0000 UTC Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.432124 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.449396 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.454461 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.462302 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.462416 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.462541 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.462781 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.463745 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.470285 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/728209c5-b124-458f-b315-306433a62a15-host\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.470350 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgx7k\" (UniqueName: \"kubernetes.io/projected/728209c5-b124-458f-b315-306433a62a15-kube-api-access-dgx7k\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 
02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.470375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/728209c5-b124-458f-b315-306433a62a15-serviceca\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.470412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/728209c5-b124-458f-b315-306433a62a15-host\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.471302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/728209c5-b124-458f-b315-306433a62a15-serviceca\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.477554 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.492634 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.494827 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgx7k\" (UniqueName: \"kubernetes.io/projected/728209c5-b124-458f-b315-306433a62a15-kube-api-access-dgx7k\") pod \"node-ca-492m9\" (UID: \"728209c5-b124-458f-b315-306433a62a15\") " pod="openshift-image-registry/node-ca-492m9" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.513096 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.556559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.659619 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.709926 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-492m9"
Feb 02 14:33:46 crc kubenswrapper[4869]: W0202 14:33:46.726840 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728209c5_b124_458f_b315_306433a62a15.slice/crio-a78a1411c5a046e47d7c279c7ed978839bb810c063c259ccbceee7e969e9c7e2 WatchSource:0}: Error finding container a78a1411c5a046e47d7c279c7ed978839bb810c063c259ccbceee7e969e9c7e2: Status 404 returned error can't find the container with id a78a1411c5a046e47d7c279c7ed978839bb810c063c259ccbceee7e969e9c7e2
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.732997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerStarted","Data":"1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.753748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.753802 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.753816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.753828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"}
Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.763096 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.764939 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.779319 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.794129 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.808136 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.828326 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.829303 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:46 crc kubenswrapper[4869]: E0202 14:33:46.829535 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.833858 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z 
is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.850305 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.867558 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.870246 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.880950 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.902579 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\
\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.918557 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.937130 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.950578 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.967228 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.972971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.973022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.973032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.973053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:46 crc kubenswrapper[4869]: I0202 14:33:46.973067 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:46Z","lastTransitionTime":"2026-02-02T14:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.075744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.075786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.075796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.076019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.076038 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.183449 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.286460 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.388851 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.422880 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:52:47.331376196 +0000 UTC Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.462536 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:47 crc kubenswrapper[4869]: E0202 14:33:47.462709 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.491769 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.573194 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.577956 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.584046 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.587321 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus
-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.594574 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.603250 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.618430 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.631548 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.656896 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.669567 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.682536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.694587 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.696596 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.708666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.720541 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.735184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.746818 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.758034 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c" exitCode=0 Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.758086 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.761732 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.761787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.762939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-492m9" event={"ID":"728209c5-b124-458f-b315-306433a62a15","Type":"ContainerStarted","Data":"8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.763009 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-492m9" event={"ID":"728209c5-b124-458f-b315-306433a62a15","Type":"ContainerStarted","Data":"a78a1411c5a046e47d7c279c7ed978839bb810c063c259ccbceee7e969e9c7e2"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.765341 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: E0202 14:33:47.769032 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.777303 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.789530 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.803169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.805171 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.817586 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.830030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.852998 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.875211 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.891140 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.903418 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.906241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:47Z","lastTransitionTime":"2026-02-02T14:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.918805 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.933895 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.951120 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.964819 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:47 crc kubenswrapper[4869]: I0202 14:33:47.980900 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:47Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.009174 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.112193 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.215414 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.318348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.421677 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.423739 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:33:14.034961264 +0000 UTC Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.462176 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.462229 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:48 crc kubenswrapper[4869]: E0202 14:33:48.462363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:48 crc kubenswrapper[4869]: E0202 14:33:48.462519 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.525169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.628621 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.731257 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.771212 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f" exitCode=0 Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.772148 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.793969 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z 
is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.810005 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.821571 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834160 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.834273 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.842461 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.859901 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.873864 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.890345 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.916059 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.928789 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.940583 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:48Z","lastTransitionTime":"2026-02-02T14:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.944625 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.958277 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.974464 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:48 crc kubenswrapper[4869]: I0202 14:33:48.991182 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:48Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.008182 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.045227 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.134703 4869 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.148486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.251303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.251727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.251882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.252012 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.252106 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.354785 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.424698 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:25:20.582440906 +0000 UTC Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.457836 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.462468 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:49 crc kubenswrapper[4869]: E0202 14:33:49.462578 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.480544 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.493783 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.516078 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.534440 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.547483 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560527 4869 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.560752 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.566681 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" 
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.582137 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.597365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.612111 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.626778 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.640879 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.655361 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.663819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.663884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.663940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.663982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.664007 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.672434 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.685890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.767801 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.778026 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590" exitCode=0 Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.778115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.785853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.799823 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.819228 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.832587 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.851776 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b
9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.869246 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.871726 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.891767 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.905064 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.920837 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.950238 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.975328 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:49Z","lastTransitionTime":"2026-02-02T14:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:49 crc kubenswrapper[4869]: I0202 14:33:49.991335 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.016689 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.037277 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.056953 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.077809 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.078666 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.181606 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284520 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.284549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324054 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324227 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324191302 +0000 UTC m=+39.968828082 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324350 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.324379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324450 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324506 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324495529 +0000 UTC m=+39.969132299 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324515 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324531 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324547 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324561 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324617 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324621 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324670 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324580 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324570711 +0000 UTC m=+39.969207481 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324712 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324703955 +0000 UTC m=+39.969340725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.324739 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:58.324731855 +0000 UTC m=+39.969368635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.387456 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.425163 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:15:09.469192847 +0000 UTC
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.462088 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.462107 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.462249 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:33:50 crc kubenswrapper[4869]: E0202 14:33:50.462427 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.489976 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.596375 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.699930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.699987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.700003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.700028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.700042 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.794595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerStarted","Data":"5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.802367 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.823610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e
9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.837747 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.850810 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.865758 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.882318 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.896799 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.904881 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:50Z","lastTransitionTime":"2026-02-02T14:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.915093 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.932217 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.948426 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.966574 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.982291 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:50 crc kubenswrapper[4869]: I0202 14:33:50.998290 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:50Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.007834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.007903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.007968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.007989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.008003 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.015329 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.028125 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.110793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.216548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.216988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.217013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.217037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.217055 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.320490 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.423500 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.425645 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:06:46.119090824 +0000 UTC
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.462413 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:51 crc kubenswrapper[4869]: E0202 14:33:51.462581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.526218 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.630333 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.733348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.802138 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9" exitCode=0 Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.802226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.809681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.810112 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.823380 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.836470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.841191 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.844012 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.859897 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.876722 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.896644 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.913940 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.925136 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.939596 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:51Z","lastTransitionTime":"2026-02-02T14:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.941611 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.955301 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.980157 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:51 crc kubenswrapper[4869]: I0202 14:33:51.994938 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:51Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.009710 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.020537 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.039239 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.042356 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.052237 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.064826 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.084835 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.100241 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.116078 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.129268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.146931 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.154232 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"
finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.170685 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.185400 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.199332 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.213272 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.225948 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.242489 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.249705 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.256847 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.352376 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.426809 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:46:01.126607325 +0000 UTC Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.455938 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.461958 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.462049 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.462108 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.462235 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.559495 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.634212 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.661803 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.666532 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.683482 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
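
The "Node became not ready" entries above all carry the same condition: the container runtime reports NetworkReady=false because no CNI network configuration exists in /etc/kubernetes/cni/net.d/, so the kubelet holds the node's Ready condition at False until the network provider writes one. Below is a minimal Go sketch of that directory check; it is illustrative only, not the actual kubelet/CRI-O code path, and the accepted extensions mirror what libcni conventionally loads.

    // Illustrative sketch of the condition behind "no CNI configuration file
    // in /etc/kubernetes/cni/net.d/"; the real check lives in the container
    // runtime's CNI plugin manager.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // hasCNIConfig reports whether a CNI network config is present in confDir.
    func hasCNIConfig(confDir string) bool {
    	entries, err := os.ReadDir(confDir)
    	if err != nil {
    		return false // a missing or unreadable dir counts as "no config"
    	}
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // extensions libcni conventionally accepts
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	if !hasCNIConfig("/etc/kubernetes/cni/net.d") {
    		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
    	}
    }
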
event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.688464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.708462 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
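
Every status-patch attempt in this section fails with the same TLS error: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02T14:33:52Z, so x509 verification rejects the handshake before the PATCH is ever delivered. A minimal Go sketch of that validity-window check follows, assuming a hypothetical PEM path for the webhook's serving certificate:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical path; point this at the webhook's actual serving cert.
    	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The handshake fails when now falls outside [NotBefore, NotAfter],
    	// which is the "certificate has expired or is not yet valid" error
    	// reported in the log above.
    	now := time.Now().UTC()
    	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
    		cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339), now.Format(time.RFC3339))
    	if now.After(cert.NotAfter) {
    		fmt.Printf("expired: current time %s is after %s\n",
    			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    	}
    }
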
event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.713552 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.730854 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.735857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.735975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
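
The patch attempts at 14:33:52.661803, .683482, .708462, and .730854 above, plus one more just below, are the kubelet's bounded retry of the node status update; after the fifth failure it logs "Unable to update node status ... exceeds retry count" and waits for the next sync rather than retrying forever. The count of five is consistent with the upstream kubelet's nodeStatusUpdateRetry constant. A simplified Go sketch of that loop, not the actual kubelet source:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // nodeStatusUpdateRetry mirrors the upstream kubelet constant (5); it caps
    // how many times one sync attempts the node status patch.
    const nodeStatusUpdateRetry = 5

    // tryUpdateNodeStatus stands in for the real PATCH call; here it always
    // fails the way the expired-certificate webhook call fails above.
    func tryUpdateNodeStatus() error {
    	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
    }

    func updateNodeStatus() error {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := tryUpdateNodeStatus(); err != nil {
    			fmt.Printf("Error updating node status, will retry: %v\n", err)
    			continue
    		}
    		return nil
    	}
    	return fmt.Errorf("update node status exceeds retry count")
    }

    func main() {
    	if err := updateNodeStatus(); err != nil {
    		fmt.Println("Unable to update node status:", err)
    	}
    }
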
event="NodeHasNoDiskPressure" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.735987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.736006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.736020 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.752820 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:52 crc kubenswrapper[4869]: E0202 14:33:52.753020 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.755954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.755993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.756003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.756024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.756036 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.818897 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b37351-c7be-4d2b-9b3a-9b4752d9d2d4" containerID="99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01" exitCode=0
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.818991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerDied","Data":"99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01"}
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.819133 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.819716 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.839731 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.849439 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.859830 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.860562 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.885244 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.899975 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.913390 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.924703 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.939423 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3
908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.956929 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.962720 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:52Z","lastTransitionTime":"2026-02-02T14:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.974787 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:52 crc kubenswrapper[4869]: I0202 14:33:52.991162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:52Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.009719 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.023597 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.037929 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.048580 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.061304 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.064872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.064945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.064960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.064977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.065011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.072989 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.087638 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.100104 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.113006 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.126147 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.138631 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.151752 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.164421 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.177277 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.180411 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.191072 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.204162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.217108 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.238939 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.280623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.280668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.280677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc 
kubenswrapper[4869]: I0202 14:33:53.280695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.280706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.384949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.385117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.385142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.385213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.385236 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.428197 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:14:33.689589829 +0000 UTC Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.462734 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:53 crc kubenswrapper[4869]: E0202 14:33:53.462884 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.488494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.596999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.597062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.597076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.597095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.597111 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.700169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.802954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.803027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.803037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.803052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.803063 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.827247 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.827802 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" event={"ID":"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4","Type":"ContainerStarted","Data":"919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.841985 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.854455 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.867472 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.886698 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.906339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.906403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:53 crc 
kubenswrapper[4869]: I0202 14:33:53.906418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.906445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.906461 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:53Z","lastTransitionTime":"2026-02-02T14:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.909510 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.927502 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.946151 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.961095 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.976758 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:53 crc kubenswrapper[4869]: I0202 14:33:53.995312 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:53Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.008815 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.009440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.009505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.009522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc 
kubenswrapper[4869]: I0202 14:33:54.009545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.009558 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.025643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.039929 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.061935 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.112067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.112121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.112132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc 
kubenswrapper[4869]: I0202 14:33:54.112151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.112164 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.214942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.214997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.215007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.215030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.215047 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318829 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.318865 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.422685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.429438 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:10:17.56891775 +0000 UTC
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.461990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.462104 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:33:54 crc kubenswrapper[4869]: E0202 14:33:54.462173 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:33:54 crc kubenswrapper[4869]: E0202 14:33:54.462257 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.526763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.630483 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.733557 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835064 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/0.log"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.835517 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.838319 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8" exitCode=1
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.838640 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8"}
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.839548 4869 scope.go:117] "RemoveContainer" containerID="6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8"
Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.853896 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.869894 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.885610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.900989 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.925871 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.938544 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:54Z","lastTransitionTime":"2026-02-02T14:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.944666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.963218 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.978501 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:54 crc kubenswrapper[4869]: I0202 14:33:54.992968 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.016395 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.033386 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.041308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.072751 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.094153 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.112294 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.143972 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247847 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.247983 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.351257 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.429764 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:47:34.700647307 +0000 UTC Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.454345 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.462647 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:55 crc kubenswrapper[4869]: E0202 14:33:55.462835 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.557603 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.660503 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.763684 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.847752 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/0.log" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.850932 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.851031 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.866382 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.867507 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.885905 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.899816 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.915941 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.931719 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.947102 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.965236 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.969712 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:55Z","lastTransitionTime":"2026-02-02T14:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:55 crc kubenswrapper[4869]: I0202 14:33:55.990314 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:55Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.006588 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.025139 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.037936 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.052926 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.066763 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072139 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.072252 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.091365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.175764 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.279596 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.383374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.430991 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:38:40.912780106 +0000 UTC Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.462674 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.462749 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:33:56 crc kubenswrapper[4869]: E0202 14:33:56.462854 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:33:56 crc kubenswrapper[4869]: E0202 14:33:56.463018 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.486295 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.588951 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.691949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.691989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.691999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.692015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.692030 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.796282 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.859430 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/1.log" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.860199 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/0.log" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.865063 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" exitCode=1 Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.865145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.865278 4869 scope.go:117] "RemoveContainer" containerID="6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.866117 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" Feb 02 14:33:56 crc kubenswrapper[4869]: E0202 14:33:56.866351 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.888221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.898818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.898870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.898881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.898898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.899011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:56Z","lastTransitionTime":"2026-02-02T14:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.901181 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.918138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.936948 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.953545 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.970815 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:56 crc kubenswrapper[4869]: I0202 14:33:56.988829 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:56Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.001826 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.011456 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.025890 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.041943 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.056492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.073027 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.088150 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.104961 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.105027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.105096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.105124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.105142 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.115257 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed 
*v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.179667 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx"] Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.180269 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.185252 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.185521 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.200952 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.208145 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.213750 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.227705 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.239962 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.254888 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.266899 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.278893 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.296166 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.304073 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.304144 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.304175 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfznq\" (UniqueName: \"kubernetes.io/projected/7087ae0f-5f9b-4da3-8081-6417819b70e8-kube-api-access-lfznq\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.304221 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.311613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.311666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc 
kubenswrapper[4869]: I0202 14:33:57.311681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.311706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.311725 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.312930 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.330794 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.354651 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added 
*v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.369973 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.393660 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.404904 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.405023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.405051 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.405087 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfznq\" (UniqueName: \"kubernetes.io/projected/7087ae0f-5f9b-4da3-8081-6417819b70e8-kube-api-access-lfznq\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.406108 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.406791 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.411698 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.413660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7087ae0f-5f9b-4da3-8081-6417819b70e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.415927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.416086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.416221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.416343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.416427 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.425139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfznq\" (UniqueName: \"kubernetes.io/projected/7087ae0f-5f9b-4da3-8081-6417819b70e8-kube-api-access-lfznq\") pod \"ovnkube-control-plane-749d76644c-4zdpx\" (UID: \"7087ae0f-5f9b-4da3-8081-6417819b70e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.431822 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:26:15.67973754 +0000 UTC Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.434311 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.462003 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:33:57 crc kubenswrapper[4869]: E0202 14:33:57.462578 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.501327 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" Feb 02 14:33:57 crc kubenswrapper[4869]: W0202 14:33:57.519092 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7087ae0f_5f9b_4da3_8081_6417819b70e8.slice/crio-e570c4326962edbf305b2c0bc39ac3596f4f9dc66a57aab3dab5ce917dfae14e WatchSource:0}: Error finding container e570c4326962edbf305b2c0bc39ac3596f4f9dc66a57aab3dab5ce917dfae14e: Status 404 returned error can't find the container with id e570c4326962edbf305b2c0bc39ac3596f4f9dc66a57aab3dab5ce917dfae14e Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.519425 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.621968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.724453 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.828325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.872081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" event={"ID":"7087ae0f-5f9b-4da3-8081-6417819b70e8","Type":"ContainerStarted","Data":"1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.872141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" event={"ID":"7087ae0f-5f9b-4da3-8081-6417819b70e8","Type":"ContainerStarted","Data":"41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.872151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" event={"ID":"7087ae0f-5f9b-4da3-8081-6417819b70e8","Type":"ContainerStarted","Data":"e570c4326962edbf305b2c0bc39ac3596f4f9dc66a57aab3dab5ce917dfae14e"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.875684 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/1.log" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.880398 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" Feb 02 14:33:57 crc kubenswrapper[4869]: E0202 14:33:57.880715 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.892023 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.910267 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.928192 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.931113 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:57Z","lastTransitionTime":"2026-02-02T14:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.956559 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c248
8e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.976037 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:57 crc kubenswrapper[4869]: I0202 14:33:57.991548 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:57Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.008326 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.022622 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.034586 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.038410 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.052420 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.066976 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.081517 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 
14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.097225 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.112536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.133588 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e80abc9bdd241713a93264ff0054f87acf8e03433940c23bc5113bbe3f446c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:54Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 14:33:54.432335 6137 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 14:33:54.432357 6137 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 14:33:54.432379 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 14:33:54.432384 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 14:33:54.432411 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:54.432431 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:54.432436 6137 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 14:33:54.432463 6137 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 14:33:54.432460 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 14:33:54.432466 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:54.432487 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:54.432489 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 14:33:54.432501 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:54.432516 6137 factory.go:656] Stopping watch factory\\\\nI0202 14:33:54.432537 6137 ovnkube.go:599] Stopped ovnkube\\\\nI0202 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added 
*v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.138096 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.150858 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.165175 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 
2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.181347 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.198300 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.212676 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.228164 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240697 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240769 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.240806 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.249624 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.261688 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.278703 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\
\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2
eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14
:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.280370 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qx2qt"]
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.281148 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.281230 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.295061 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.311655 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.328812 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344322 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.344264 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.359784 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.373791 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.390139 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.405249 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416342 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416457 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416518 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416614 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416628 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.416585233 +0000 UTC m=+56.061222003 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416687 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.416664205 +0000 UTC m=+56.061301205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fp98\" (UniqueName: \"kubernetes.io/projected/0b597927-2943-4e1a-bac5-1266d539e8f8-kube-api-access-2fp98\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.416888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416755 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.416981 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417005 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417051 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417083 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.417068815 +0000 UTC m=+56.061705815 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417103 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.417094655 +0000 UTC m=+56.061731425 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417142 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417174 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417195 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.417274 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.417254019 +0000 UTC m=+56.061890979 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.420492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.432881 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:15:59.353825783 +0000 UTC
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.435899 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453569 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.453604 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.461617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.461776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.461967 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.462109 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.466605 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.485494 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.497868 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.512815 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.517685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fp98\" (UniqueName: \"kubernetes.io/projected/0b597927-2943-4e1a-bac5-1266d539e8f8-kube-api-access-2fp98\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.517750 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.517930 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: E0202 14:33:58.518002 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:33:59.017979301 +0000 UTC m=+40.662616091 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.526620 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.533997 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fp98\" (UniqueName: \"kubernetes.io/projected/0b597927-2943-4e1a-bac5-1266d539e8f8-kube-api-access-2fp98\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.538216 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.553389 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.556279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.556314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc 
kubenswrapper[4869]: I0202 14:33:58.556328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.556358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.556373 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.573409 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.587417 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.606562 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.619254 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:58Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.659406 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.762480 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.865165 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:58 crc kubenswrapper[4869]: I0202 14:33:58.967276 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:58Z","lastTransitionTime":"2026-02-02T14:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.022118 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:33:59 crc kubenswrapper[4869]: E0202 14:33:59.022305 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:33:59 crc kubenswrapper[4869]: E0202 14:33:59.022390 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:00.022366288 +0000 UTC m=+41.667003068 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.071134 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.174140 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.277430 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.380857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.381361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.381377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.381396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.381409 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.433075 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:29:43.865150132 +0000 UTC
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.463181 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:33:59 crc kubenswrapper[4869]: E0202 14:33:59.463376 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.463446 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.477761 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.483970 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.497541 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.514672 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.533315 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.549165 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.563610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.578881 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.586552 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.597620 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.613188 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 
14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.630376 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.647447 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.659697 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.678441 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.688900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.688960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.688972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.688991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.689004 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.690599 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.706710 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.721830 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792003 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.792102 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.889072 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.890868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.891874 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.894538 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.906521 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.921633 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.932996 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.947383 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 
14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.959760 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.976120 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999176 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:33:59 crc kubenswrapper[4869]: I0202 14:33:59.999248 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:33:59Z","lastTransitionTime":"2026-02-02T14:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.003716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:33:59Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.018867 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.035824 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.035944 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:02.035902224 +0000 UTC m=+43.680538994 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.035661 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.037213 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.054184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.072986 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.094073 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.102341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.102821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.102895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.103025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.103097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.111173 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.127425 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.144225 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.158548 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:00Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.206822 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.309820 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.412879 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.434275 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:37:47.22688946 +0000 UTC Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.462608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.462761 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.463253 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.463339 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.463403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:00 crc kubenswrapper[4869]: E0202 14:34:00.463472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.515724 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.618930 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.722511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.825285 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:00 crc kubenswrapper[4869]: I0202 14:34:00.928122 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
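The condition object logged by setters.go:603 is a standard Kubernetes NodeCondition. A minimal struct that round-trips the fields visible in these entries:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"time"
    )

    // NodeCondition mirrors the fields present in the setters.go output above.
    type NodeCondition struct {
    	Type               string    `json:"type"`
    	Status             string    `json:"status"`
    	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
    	LastTransitionTime time.Time `json:"lastTransitionTime"`
    	Reason             string    `json:"reason"`
    	Message            string    `json:"message"`
    }

    func main() {
    	// Condition payload shortened from the log entries above.
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:00Z","lastTransitionTime":"2026-02-02T14:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
    	var c NodeCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s=%s since %s (%s)\n",
    		c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason)
    }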
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.031420 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.134819 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.237971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.238024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.238037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.238058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.238072 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.341713 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.434902 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:37:52.81091372 +0000 UTC
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.444655 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.462605 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:01 crc kubenswrapper[4869]: E0202 14:34:01.462838 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
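The kubelet-serving certificate manager recomputes its rotation deadline on every pass, which is why consecutive entries report different deadlines (2025-12-13, then 2026-01-18, then 2026-01-17) for the same 2026-02-24 expiration. A sketch of a jittered deadline of that shape; the exact lifetime fraction used by client-go's certificate manager is an assumption here, as is the issuance time:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a random point in the final 30% of the
    // certificate's validity. This approximates the jittered deadline in
    // k8s.io/client-go/util/certificate; the 0.7 floor is illustrative.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
    	notBefore := notAfter.Add(-365 * 24 * time.Hour) // issuance time assumed
    	for i := 0; i < 3; i++ {
    		// Each call lands somewhere else, matching the shifting
    		// deadlines seen across the certificate_manager entries.
    		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
    	}
    }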
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.548821 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.651640 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.754544 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.857256 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:01 crc kubenswrapper[4869]: I0202 14:34:01.959728 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:01Z","lastTransitionTime":"2026-02-02T14:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
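Every entry in this log shares the klog header layout: severity letter, MMDD date, wall-clock time with microseconds, PID, and source file:line, followed by a structured message. A small parser for pulling those fields apart, with a regular expression written against the lines above:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogHeader matches e.g. `I0202 14:34:00.206677 4869 setters.go:603] ...`.
    var klogHeader = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

    func main() {
    	line := `I0202 14:34:01.754426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"`
    	m := klogHeader.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s msg=%s\n",
    		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }

The `Feb 02 14:34:01 crc kubenswrapper[4869]:` prefix in front of each header is added by the journal, not by klog, so a full line parser would strip that prefix first.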
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.058139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.058339 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.058418 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:06.05839793 +0000 UTC m=+47.703034700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.062933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.062980 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.062992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.063011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.063025 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
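The metrics-certs mount fails because the metrics-daemon-secret object is not yet registered with this kubelet, and the operation executor blocks retries for 4s (next attempt at 14:34:06). The kubelet spaces such retries with exponential backoff; a generic doubling-with-cap schedule, where the initial delay and the cap are illustrative values and only the 4s step is taken from the durationBeforeRetry above:

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoffSchedule doubles the delay after each failed attempt up to
    // maxDelay, the general shape of the kubelet's per-operation retry
    // backoff. 500ms and 2m are illustrative, not read from kubelet source.
    func backoffSchedule(initial, maxDelay time.Duration, attempts int) []time.Duration {
    	var out []time.Duration
    	d := initial
    	for i := 0; i < attempts; i++ {
    		out = append(out, d)
    		d *= 2
    		if d > maxDelay {
    			d = maxDelay
    		}
    	}
    	return out
    }

    func main() {
    	for i, d := range backoffSchedule(500*time.Millisecond, 2*time.Minute, 8) {
    		fmt.Printf("attempt %d: wait %v\n", i+1, d) // attempt 4 waits 4s, as logged here
    	}
    }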
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165776 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.165900 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.268875 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.371864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.371954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.371969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.371991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.372006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.435677 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:09:11.506385089 +0000 UTC
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.461988 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.462014 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.462038 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.462176 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.462305 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.462497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.475135 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.578360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.681650 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.785138 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.874348 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.875254 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3"
Feb 02 14:34:02 crc kubenswrapper[4869]: E0202 14:34:02.875441 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.888347 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:02 crc kubenswrapper[4869]: I0202 14:34:02.991451 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:02Z","lastTransitionTime":"2026-02-02T14:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
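ovnkube-controller is in CrashLoopBackOff with a 10s back-off. The kubelet doubles the restart delay after each failed run up to a five-minute ceiling, and resets it once a container stays up long enough; a sketch of that progression (the 10s base, doubling, and 5m cap match documented kubelet behavior, the reset rule is omitted):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// CrashLoopBackOff delay progression: 10s, doubling per restart,
    	// capped at 5m.
    	delay := 10 * time.Second
    	const maxDelay = 5 * time.Minute
    	for restart := 1; restart <= 7; restart++ {
    		fmt.Printf("restart %d: back-off %v\n", restart, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

This pod is at the first step of that schedule, which is why the sync error reports "back-off 10s restarting failed container".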
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.094536 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.146690 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.161296 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.166630 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.184099 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.188738 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.207558 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.213666 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.228586 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.233710 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.251712 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:03Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.251863 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
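Every status-patch retry above fails on the same TLS check: the webhook's serving certificate validity window no longer contains the current time (2026-02-02T14:34:03Z is after the certificate's notAfter of 2025-08-24T17:21:41Z), so the kubelet eventually gives up with "exceeds retry count". For reference, a minimal standalone Go diagnostic sketch (not kubelet code; the address 127.0.0.1:9743 is taken from the log) that connects to the webhook and prints the validity window the x509 verifier compares against:

    // certcheck.go -- standalone diagnostic sketch, not kubelet code.
    // Assumption: run on the node itself, where the webhook listens on
    // 127.0.0.1:9743 as shown in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Skip chain verification deliberately: the point is to inspect a
        // certificate that would otherwise fail verification.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now().UTC()
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
        // The same comparison the x509 verifier reports in the log.
        if now.After(cert.NotAfter) {
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }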
event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.254486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.357930 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.436734 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:15:32.396271304 +0000 UTC Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.461450 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.462026 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:03 crc kubenswrapper[4869]: E0202 14:34:03.462153 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.564544 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
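The certificate_manager.go:356 lines deserve a note: the same kubelet-serving expiration (2026-02-24 05:53:03 UTC) is paired with a different rotation deadline on each attempt (2025-11-29 here, 2025-12-12 one second later), and both deadlines are already in the past relative to the log's clock, so rotation is due immediately and the deadline keeps being re-rolled. A minimal Go sketch of the usual jittered-deadline scheme follows; the 70-90% band and the one-year lifetime are illustrative assumptions, not values read out of this log:

    // Jittered-rotation sketch. Assumptions: the 70-90% band and the
    // one-year certificate lifetime are stand-ins for illustration.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        // Pick a random point late in the validity window so a fleet of
        // kubelets does not all rotate at the same instant.
        return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
    }

    func main() {
        notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
        if err != nil {
            panic(err)
        }
        notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issue time
        // Re-rolling on each attempt is consistent with the two different
        // deadlines logged above for the same expiration.
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
        }
    }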
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.667530 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.770998 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.873740 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:03 crc kubenswrapper[4869]: I0202 14:34:03.977122 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:03Z","lastTransitionTime":"2026-02-02T14:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.079949 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.182939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.183017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.183054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.183079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.183094 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
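The setters.go:603 entries repeat because the kubelet re-evaluates readiness on every sync and keeps producing the same Ready=False condition until a CNI configuration appears. A self-contained Go sketch that rebuilds that condition object with a local struct (the real type is k8s.io/api/core/v1.NodeCondition, omitted here to keep the example dependency-free):

    // Sketch: the condition object from the setters.go lines above, rebuilt
    // with a local struct so the example has no Kubernetes dependencies.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type NodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        ready := NodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  "2026-02-02T14:34:03Z",
            LastTransitionTime: "2026-02-02T14:34:03Z",
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
        }
        out, _ := json.Marshal(ready)
        // Matches the condition={...} payload logged by setters.go:603.
        fmt.Println(string(out))
    }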
Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.286203 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.389328 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:04Z","lastTransitionTime":"2026-02-02T14:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.437315 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:26:02.077379173 +0000 UTC Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.461724 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.461773 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:04 crc kubenswrapper[4869]: I0202 14:34:04.461802 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 14:34:04 crc kubenswrapper[4869]: E0202 14:34:04.461955 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:04 crc kubenswrapper[4869]: E0202 14:34:04.462173 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:04 crc kubenswrapper[4869]: E0202 14:34:04.462094 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... repeated NodeNotReady node-status blocks elided ...]
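Every one of these errors bottoms out in the same condition: no network configuration file in /etc/kubernetes/cni/net.d/. A rough illustration, not CRI-O's actual implementation, of a libcni-style check for configuration files (libcni loads *.conf, *.conflist and *.json from the conf directory):

// Rough illustration (not CRI-O source): does the CNI conf dir hold any config?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err) // false while the network provider has not written a config
}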
Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.438460 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:34:54.945524329 +0000 UTC
Feb 02 14:34:05 crc kubenswrapper[4869]: I0202 14:34:05.462071 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:05 crc kubenswrapper[4869]: E0202 14:34:05.462260 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... repeated NodeNotReady node-status blocks elided ...]
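Pods that need the cluster network are skipped while the runtime reports NetworkReady=false; host-network pods are exempt, which is why the static control-plane pods further down keep running and probing. A hedged sketch of that gate (illustrative only, not the kubelet's actual code):

// Hedged sketch of the gating logic: a pod that requires the cluster network
// cannot be synced while NetworkReady=false; host-network pods are unaffected.
package main

import (
	"errors"
	"fmt"
)

type Pod struct {
	Name        string
	HostNetwork bool
}

func canSync(networkReady bool, p Pod) error {
	if !networkReady && !p.HostNetwork {
		return errors.New("network is not ready: container runtime network not ready")
	}
	return nil
}

func main() {
	fmt.Println(canSync(false, Pod{Name: "network-check-target-xd92c"}))            // skipped
	fmt.Println(canSync(false, Pod{Name: "kube-apiserver-crc", HostNetwork: true})) // syncs
}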
Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.106806 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.107108 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.107229 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:14.107198696 +0000 UTC m=+55.751835466 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered
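The 8s durationBeforeRetry above is one step of an exponential backoff on the failed mount operation. A sketch of the progression; the initial delay, doubling factor and cap are assumptions consistent with the observed 8s, not values read from kubelet source:

// Sketch of exponential backoff; 500ms initial, 2x factor and ~2m cap are
// assumptions consistent with the observed 8s durationBeforeRetry.
package main

import (
	"fmt"
	"time"
)

func backoff(initial time.Duration, factor float64, maxDelay time.Duration, attempt int) time.Duration {
	d := initial
	for i := 0; i < attempt; i++ {
		d = time.Duration(float64(d) * factor)
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for attempt := 0; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry after %v\n", attempt, backoff(500*time.Millisecond, 2, 2*time.Minute, attempt))
	}
	// attempt 4 yields 8s, matching the entry above
}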
[... repeated NodeNotReady node-status blocks elided ...]
Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.438897 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:56:11.227123977 +0000 UTC
[... repeated NodeNotReady node-status blocks elided ...]
Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.462715 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.462805 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:06 crc kubenswrapper[4869]: I0202 14:34:06.462702 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
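Each kubenswrapper entry carries a klog header: severity letter, mmdd date, wall-clock time, PID, source file:line, then the message. A minimal parser for excerpts like this one:

// Minimal sketch: parse the klog header used by the entries in this excerpt.
package main

import (
	"fmt"
	"regexp"
)

var klogRe = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) +(\d+) ([\w.]+:\d+)\] (.*)`)

func main() {
	line := `I0202 14:34:06.462715 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"`
	if m := klogRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%s\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}
}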
Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.462867 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.463020 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:06 crc kubenswrapper[4869]: E0202 14:34:06.463126 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... repeated NodeNotReady node-status blocks elided ...]
Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.439374 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:44:27.096066338 +0000 UTC
Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.462070 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:07 crc kubenswrapper[4869]: E0202 14:34:07.462275 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... repeated NodeNotReady node-status blocks elided ...]
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.498227 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.601977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.602045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.602058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.602079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.602093 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.794563 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.807661 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
[... repeated NodeNotReady node-status blocks elided ...]
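The two SyncLoop entries mark the kubelet's main event loop picking up a readiness-probe result and an ADD for the scheduler static pod from the API source. A schematic of such a dispatch loop (illustrative only, not kubelet source):

// Schematic of an event loop in the style of the kubelet's SyncLoop:
// dispatch on pod updates and probe results from separate channels.
package main

import "fmt"

type PodUpdate struct {
	Op     string // e.g. "ADD"
	Source string // e.g. "api"
	Pods   []string
}

type ProbeResult struct {
	Pod, Probe, Status string
}

func main() {
	updates := make(chan PodUpdate, 1)
	probes := make(chan ProbeResult, 1)
	probes <- ProbeResult{Pod: "openshift-kube-scheduler/openshift-kube-scheduler-crc", Probe: "readiness", Status: "ready"}
	updates <- PodUpdate{Op: "ADD", Source: "api", Pods: []string{"openshift-kube-scheduler/openshift-kube-scheduler-crc"}}
	for i := 0; i < 2; i++ {
		select { // handle whichever source has an event ready
		case u := <-updates:
			fmt.Printf("SyncLoop %s source=%q pods=%v\n", u.Op, u.Source, u.Pods)
		case p := <-probes:
			fmt.Printf("SyncLoop (probe) probe=%q status=%q pod=%q\n", p.Probe, p.Status, p.Pod)
		}
	}
}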
Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.816944 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.843467 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.857829 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z"
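The payloads being rejected are strategic merge patches: the $setElementOrder/conditions directive lists the conditions by their merge key (type) to pin ordering, while the conditions array carries only the changed fields. A small sketch that builds such a patch body (illustrative, not status_manager source):

// Illustrative sketch: construct a strategic-merge-patch status body with a
// $setElementOrder directive, as seen in the entries above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]interface{}{
		"status": map[string]interface{}{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// each entry carries only the fields that changed
			"conditions": []map[string]string{
				{"type": "Ready", "reason": "ContainersNotReady"},
			},
		},
	}
	b, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(b))
}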
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.872176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.886013 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.903505 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.922602 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.926705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.926740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.926752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:07 crc 
kubenswrapper[4869]: I0202 14:34:07.926772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.926786 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:07Z","lastTransitionTime":"2026-02-02T14:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.938243 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.952592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.966780 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:07 crc kubenswrapper[4869]: I0202 14:34:07.987102 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.001082 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:07Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.018652 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:08Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.029618 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.036791 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:08Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.050380 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:08Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.069363 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:08Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.132017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.132064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc 
kubenswrapper[4869]: I0202 14:34:08.132074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.132089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.132099 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.235343 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.338739 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.439509 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:58:50.070035451 +0000 UTC Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.441543 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.463743 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.463814 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:08 crc kubenswrapper[4869]: E0202 14:34:08.463889 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:08 crc kubenswrapper[4869]: E0202 14:34:08.463986 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.464544 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:08 crc kubenswrapper[4869]: E0202 14:34:08.464768 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.544939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.545354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.545458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.545557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.545674 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.648814 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.751404 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.854978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.855195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.855226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.855291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.855317 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:08 crc kubenswrapper[4869]: I0202 14:34:08.964559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:08Z","lastTransitionTime":"2026-02-02T14:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.067867 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.171973 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.275254 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.378494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.440179 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:15:26.550951485 +0000 UTC Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.461787 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:09 crc kubenswrapper[4869]: E0202 14:34:09.461980 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.479957 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.480873 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.480988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.481015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.481045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.481062 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.495506 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.516956 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.534692 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.555211 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.573850 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.583835 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.587740 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.600805 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.617274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.631487 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.645998 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.662115 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.675460 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689132 4869 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.689193 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.691131 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.706655 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.720019 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.740086 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:09Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792667 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.792678 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.895925 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:09 crc kubenswrapper[4869]: I0202 14:34:09.999373 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:09Z","lastTransitionTime":"2026-02-02T14:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.101961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.102014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.102023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.102038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.102048 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.204941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.204990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.204999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.205011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.205021 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.307993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.411830 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.441162 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:44:30.181656156 +0000 UTC
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.461676 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.461778 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:10 crc kubenswrapper[4869]: E0202 14:34:10.461831 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:10 crc kubenswrapper[4869]: E0202 14:34:10.462000 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.461792 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:10 crc kubenswrapper[4869]: E0202 14:34:10.462123 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.516111 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.627800 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.730548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.834223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.911892 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.932972 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937631 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.937673 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:10Z","lastTransitionTime":"2026-02-02T14:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.947575 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.961308 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.977643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:10 crc kubenswrapper[4869]: I0202 14:34:10.990964 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:10Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.003813 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.020721 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.040667 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.042308 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a
0c1a0389d6ee4948f76a68e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.054468 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.069882 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.089746 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.101425 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.121856 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.142589 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.142870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.142989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.143008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.143035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.143052 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.159482 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.173079 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.188422 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:11Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.246292 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.349324 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.441379 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:45:37.551097107 +0000 UTC Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.451979 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.462224 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:11 crc kubenswrapper[4869]: E0202 14:34:11.462419 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.554590 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657498 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.657547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.760246 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.863386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:11 crc kubenswrapper[4869]: I0202 14:34:11.966219 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:11Z","lastTransitionTime":"2026-02-02T14:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.068936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.068998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.069016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.069040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.069059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.171967 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.275322 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.378976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.379050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.379067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.379097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.379115 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.441892 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:58:43.759919653 +0000 UTC Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.462291 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.462337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.462367 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:12 crc kubenswrapper[4869]: E0202 14:34:12.462432 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:12 crc kubenswrapper[4869]: E0202 14:34:12.462546 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:12 crc kubenswrapper[4869]: E0202 14:34:12.462660 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.482841 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.585297 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.688544 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.791974 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.895298 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:12 crc kubenswrapper[4869]: I0202 14:34:12.998306 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:12Z","lastTransitionTime":"2026-02-02T14:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.101680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.101869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.101899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.101987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.102024 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.205188 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.302260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
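The repeating block above is the kubelet's node-status sync loop: roughly every 100 ms it re-records the same four node events and sets the Ready condition to False, always for the same reason, namely that no CNI network configuration exists in /etc/kubernetes/cni/net.d/. A minimal sketch of the equivalent check, assuming shell access to the node; the directory path is taken from the log, everything else (names, filtering) is illustrative:

    import json
    from pathlib import Path

    # Directory the kubelet error above says it searches for CNI config.
    CNI_DIR = Path("/etc/kubernetes/cni/net.d")

    def cni_configs(d: Path = CNI_DIR):
        """Return (filename, parsed-or-error) for every CNI config file found."""
        if not d.is_dir():
            return []
        out = []
        for p in sorted(d.iterdir()):
            if p.suffix in {".conf", ".conflist", ".json"}:
                try:
                    out.append((p.name, json.loads(p.read_text())))
                except (OSError, json.JSONDecodeError) as exc:
                    out.append((p.name, f"unreadable: {exc}"))
        return out

    if __name__ == "__main__":
        found = cni_configs()
        if not found:
            # Matches the NetworkPluginNotReady condition recorded above.
            print("no CNI configuration found in", CNI_DIR)
        for name, cfg in found:
            print(name, cfg if isinstance(cfg, str) else cfg.get("name", "<unnamed>"))

An empty result here is consistent with every "Error syncing pod" and NodeNotReady entry in this stretch of the log; the network provider pods cannot start, so the config never appears.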
Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.321587 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
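The patch above is well-formed; it is rejected because the API server must consult the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-02. A minimal sketch that confirms the validity window by fetching the certificate without verifying it (verification is exactly what fails above); the host, port, and dates come from the log, while the third-party cryptography package and everything else here are assumptions:

    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party; assumed available on the node

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log line above

    # Disable verification so the handshake succeeds even with an expired cert.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER, no validation

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    # On cryptography < 42 these properties are not_valid_before/not_valid_after.
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:  ", now > cert.not_valid_after_utc)

If the printed notAfter matches 2025-08-24T17:21:41Z, the failure is purely a certificate-rotation problem, not a networking one: the kubelet's status data never reaches etcd because the admission call fails first.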
event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.327631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.348953 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.354658 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.371611 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.376447 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.397686 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.403706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.427349 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:13Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.427626 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.430464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.443014 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:03:26.601457629 +0000 UTC Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.462612 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:13 crc kubenswrapper[4869]: E0202 14:34:13.462825 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533667 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.533713 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.637588 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739757 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.739802 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.843255 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:13 crc kubenswrapper[4869]: I0202 14:34:13.946247 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:13Z","lastTransitionTime":"2026-02-02T14:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.049393 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.142020 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.142172 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.142231 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:30.142218642 +0000 UTC m=+71.786855412 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.152642 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.256630 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.359586 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.443646 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:52:13.157379583 +0000 UTC Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445473 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.445677 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.445641658 +0000 UTC m=+88.090278478 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445742 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.445787 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.445874 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.445845254 +0000 UTC m=+88.090482024 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.445932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446037 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446072 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.446064389 +0000 UTC m=+88.090701159 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446119 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446161 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446188 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446200 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446266 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446287 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.446263214 +0000 UTC m=+88.090900024 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446295 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.446409 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:34:46.446374357 +0000 UTC m=+88.091011287 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.461593 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.461740 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.462180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.462271 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.462378 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:14 crc kubenswrapper[4869]: E0202 14:34:14.462466 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.463306 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.566504 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.669237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.773323 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.876512 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978905 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:14 crc kubenswrapper[4869]: I0202 14:34:14.978945 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:14Z","lastTransitionTime":"2026-02-02T14:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.081962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.081998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.082006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.082020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.082030 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.184809 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.286953 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.390593 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.443922 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:41:08.572963372 +0000 UTC Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.462643 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:15 crc kubenswrapper[4869]: E0202 14:34:15.462834 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.492804 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.595643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.698510 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.801968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:15 crc kubenswrapper[4869]: I0202 14:34:15.904209 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:15Z","lastTransitionTime":"2026-02-02T14:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.006716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.007151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.007160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.007176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.007188 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.109298 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.213344 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.316470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.419983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.420050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.420068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.420097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.420115 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.444494 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:45:10.918340499 +0000 UTC
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.461973 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.462016 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.461973 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:16 crc kubenswrapper[4869]: E0202 14:34:16.462187 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:16 crc kubenswrapper[4869]: E0202 14:34:16.462108 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:16 crc kubenswrapper[4869]: E0202 14:34:16.462292 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.522854 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.625716 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.727855 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.830490 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.932955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.932986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.932995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.933008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:16 crc kubenswrapper[4869]: I0202 14:34:16.933016 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:16Z","lastTransitionTime":"2026-02-02T14:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.036534 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.139701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.242869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.242974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.243000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.243042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.243061 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353803 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.353835 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.445363 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:19:32.909034923 +0000 UTC
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.457604 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.462073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:17 crc kubenswrapper[4869]: E0202 14:34:17.462237 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.463483 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.560970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.561023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.561037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.561062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.561074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.663384 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766568 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.766591 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.872941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.872987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.872999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.873016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.873029 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.968114 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/1.log"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.971958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.972544 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.976199 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:17Z","lastTransitionTime":"2026-02-02T14:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:17 crc kubenswrapper[4869]: I0202 14:34:17.993633 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:17Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.018555 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.031674 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.044759 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.056623 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.067738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.078747 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.087659 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.100966 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.112343 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.128533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.143070 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.156312 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.172666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180847 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.180869 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.185365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.198430 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.213145 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.226006 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.283733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.386984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.387027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.387036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.387052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.387064 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.446329 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:01:50.3857065 +0000 UTC Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.461755 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:18 crc kubenswrapper[4869]: E0202 14:34:18.461930 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.461984 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:18 crc kubenswrapper[4869]: E0202 14:34:18.462168 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.462397 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:18 crc kubenswrapper[4869]: E0202 14:34:18.462596 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.489353 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.591273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.591726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.591969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.592127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.592267 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.695180 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.797971 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.900309 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:18Z","lastTransitionTime":"2026-02-02T14:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.978525 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/2.log" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.979091 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/1.log" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.980964 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" exitCode=1 Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.981001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46"} Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.981033 4869 scope.go:117] "RemoveContainer" containerID="05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.981786 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:34:18 crc kubenswrapper[4869]: E0202 14:34:18.981940 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:34:18 crc kubenswrapper[4869]: I0202 14:34:18.997189 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.008788 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.011021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.027834 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.063506 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.073943 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.087207 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.101168 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111527 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.111629 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.124082 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c248
8e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.136857 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f60
95941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.150297 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.164757 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.179711 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.193424 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc
-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.207550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.215880 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.219068 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.230102 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.318494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.421813 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.446719 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:29:05.933843128 +0000 UTC Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.461706 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:19 crc kubenswrapper[4869]: E0202 14:34:19.462609 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.485674 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.500054 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.517743 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.525038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.525104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.525118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.525135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc 
kubenswrapper[4869]: I0202 14:34:19.525147 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.533868 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.547882 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.559899 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.571901 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.586021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.598572 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.613494 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639610 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05bbc476d48cab44dd16b75582a59548df25652a0c1a0389d6ee4948f76a68e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:33:56Z\\\",\\\"message\\\":\\\": 1.074248ms\\\\nI0202 14:33:55.876356 6302 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876407 6302 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 14:33:55.876439 6302 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 14:33:55.876445 6302 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 14:33:55.876482 6302 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 14:33:55.876488 6302 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0202 14:33:55.876505 6302 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 14:33:55.876519 6302 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 14:33:55.876526 6302 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 14:33:55.876526 6302 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 14:33:55.876536 6302 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 14:33:55.876551 6302 factory.go:656] Stopping watch factory\\\\nI0202 14:33:55.876569 6302 ovnkube.go:599] Stopped ovnkube\\\\nI0202 14:33:55.876594 6302 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 14:33:55.876610 6302 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 14:33:55.876757 6302 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.639998 4869 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.653556 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.673279 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.685669 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.700091 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.1
1\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.715060 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.728457 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:19Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745680 4869 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.745727 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.848695 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.951751 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:19Z","lastTransitionTime":"2026-02-02T14:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.986968 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/2.log" Feb 02 14:34:19 crc kubenswrapper[4869]: I0202 14:34:19.990785 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:34:19 crc kubenswrapper[4869]: E0202 14:34:19.991043 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.002392 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.017274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.028322 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.046480 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.055298 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.057353 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.071168 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.085463 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.101550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.121030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.136405 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.151986 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.158326 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.170188 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.187035 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.201000 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.213341 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.224232 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.237614 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:20Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.261828 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.364168 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.446998 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:55:52.423337191 +0000 UTC Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.462433 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.462508 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.462487 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:20 crc kubenswrapper[4869]: E0202 14:34:20.462663 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:20 crc kubenswrapper[4869]: E0202 14:34:20.462806 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:20 crc kubenswrapper[4869]: E0202 14:34:20.462885 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.466875 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.569834 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.673352 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.776325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.879370 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:20 crc kubenswrapper[4869]: I0202 14:34:20.982559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:20Z","lastTransitionTime":"2026-02-02T14:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.090566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.091178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.091294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.091331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.091356 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.195304 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.298721 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.402468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.447440 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:07:09.229156177 +0000 UTC Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.462145 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:21 crc kubenswrapper[4869]: E0202 14:34:21.462419 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.506451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.506844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.506954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.507047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.507136 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.609977 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.712821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.712893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.712955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.712989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.713006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.816844 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:21 crc kubenswrapper[4869]: I0202 14:34:21.920245 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:21Z","lastTransitionTime":"2026-02-02T14:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.023379 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126520 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.126530 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.247434 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.350349 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.448218 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:08:36.385090318 +0000 UTC Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.453288 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.461617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.461682 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:22 crc kubenswrapper[4869]: E0202 14:34:22.461765 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:22 crc kubenswrapper[4869]: E0202 14:34:22.461895 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.461629 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:22 crc kubenswrapper[4869]: E0202 14:34:22.462001 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.557215 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.661293 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.764543 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.867893 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:22 crc kubenswrapper[4869]: I0202 14:34:22.970561 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:22Z","lastTransitionTime":"2026-02-02T14:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.073700 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.178670 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.281374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.383756 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.448987 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:41:06.469432004 +0000 UTC Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.462652 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.462816 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.469706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.494691 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500922 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.500959 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.515612 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
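Every "Node became not ready" and "Error syncing pod" entry above carries the same root cause: nothing has yet written a CNI network definition under /etc/kubernetes/cni/net.d/, so the container runtime network never reports Ready. A minimal Go sketch of that readiness probe, under the assumption (not taken from the kubelet source) that a CNI config is any *.conf, *.conflist, or *.json file in the directory named by the log messages:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// confDir is the directory named in the kubelet messages above.
const confDir = "/etc/kubernetes/cni/net.d"

func main() {
	// Look for any CNI network definition; the kubelet stays NotReady
	// while none exists. The set of extensions is an assumption here.
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file in", confDir,
			"- network provider likely not started")
		os.Exit(1)
	}
	fmt.Println("CNI configs:", found)
}

Run on the node during the window above, this would be expected to print the same "no CNI configuration file" verdict until the network provider writes its config into the directory.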
event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.520788 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.533792 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.537735 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.550166 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.554380 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.566977 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:23Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:23 crc kubenswrapper[4869]: E0202 14:34:23.567162 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.570637 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.673526 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.776867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.776969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.776989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.777023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.777048 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.880194 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:23 crc kubenswrapper[4869]: I0202 14:34:23.982450 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:23Z","lastTransitionTime":"2026-02-02T14:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.085537 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.188262 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.291252 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.393521 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.449320 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:14:29.205079694 +0000 UTC Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.461940 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.462020 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.461943 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:24 crc kubenswrapper[4869]: E0202 14:34:24.462072 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:24 crc kubenswrapper[4869]: E0202 14:34:24.462192 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:24 crc kubenswrapper[4869]: E0202 14:34:24.462276 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.495822 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.598953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.599021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.599037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.599052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.599065 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.702617 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.805462 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:24 crc kubenswrapper[4869]: I0202 14:34:24.908685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:24Z","lastTransitionTime":"2026-02-02T14:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.010971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.011047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.011064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.011085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.011101 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.113413 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.216753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.217100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.217192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.217293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.217386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.319934 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.319985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.319998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.320014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.320025 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.422622 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.450345 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:49:12.042854659 +0000 UTC Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.461898 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:25 crc kubenswrapper[4869]: E0202 14:34:25.462225 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525070 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.525096 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628286 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.628327 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.730843 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.833714 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.936945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.937017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.937039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.937063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:25 crc kubenswrapper[4869]: I0202 14:34:25.937081 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:25Z","lastTransitionTime":"2026-02-02T14:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.039511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.141144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.243691 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.346178 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.449174 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.451423 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:17:27.867635656 +0000 UTC Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.461740 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.461754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:26 crc kubenswrapper[4869]: E0202 14:34:26.461944 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
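
The "no CNI configuration file in /etc/kubernetes/cni/net.d/" errors repeat because the network plugin (OVN-Kubernetes with Multus on this CRC node) has not yet written its config into that directory; until a config file appears there, every pod that needs a new sandbox is skipped. A rough Go sketch of that directory check, assuming the usual libcni extensions (.conf, .conflist, .json):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the errors above
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        var confs []string
        for _, e := range entries {
            switch strings.ToLower(filepath.Ext(e.Name())) {
            case ".conf", ".conflist", ".json": // extensions libcni scans for (assumption)
                confs = append(confs, e.Name())
            }
        }
        if len(confs) == 0 {
            // This is the state the kubelet above keeps reporting:
            fmt.Println("no CNI configuration file in", confDir)
            return
        }
        fmt.Println("CNI configs:", confs)
    }
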
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.461754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:26 crc kubenswrapper[4869]: E0202 14:34:26.462052 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:26 crc kubenswrapper[4869]: E0202 14:34:26.462091 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551495 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.654225 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.756488 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.858632 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.960864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.960973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.960991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.961014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.961031 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:26Z","lastTransitionTime":"2026-02-02T14:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.063336 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.165771 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.267853 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.370381 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.452186 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:30:47.344048317 +0000 UTC Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.462591 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:27 crc kubenswrapper[4869]: E0202 14:34:27.462756 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
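
The certificate_manager.go:356 lines are worth a second look: the serving certificate's expiration is fixed at 2026-02-24 05:53:03, yet each pass logs a different rotation deadline (2025-11-21, 2025-11-07, 2025-12-06, 2025-12-30, 2025-11-28 across this excerpt). By my reading of client-go's certificate manager, the deadline is re-drawn on every pass at a jittered 70-90% of the certificate's lifetime, which fits a roughly one-year certificate here; a sketch under those two assumptions:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // jitteredDeadline mimics (by assumption) client-go's rotation policy:
    // pick a point 70-90% of the way through the certificate's lifetime.
    func jitteredDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(frac * float64(total)))
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry logged above
        notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year lifetime
        for i := 0; i < 3; i++ {
            // Re-rolled each attempt, which is why every pass above
            // logs a different "rotation deadline" for the same cert.
            fmt.Println("rotation deadline:", jitteredDeadline(notBefore, notAfter))
        }
    }
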
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.472997 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.575766 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.686871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.686968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.686978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.686997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.687006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.790132 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.892686 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:27 crc kubenswrapper[4869]: I0202 14:34:27.995573 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:27Z","lastTransitionTime":"2026-02-02T14:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.098717 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.201348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304425 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.304451 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.406777 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.453203 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:17:21.437611492 +0000 UTC Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.462696 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.462776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:28 crc kubenswrapper[4869]: E0202 14:34:28.462856 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
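
Note that the journal wrapped several of these records mid-entry ("...net.d/." ends one line, "Has your network provider started?" opens the next). Every kubelet entry begins with the prefix "Feb 02 HH:MM:SS crc kubenswrapper[4869]:", so the dump can be re-linearized to one entry per line; a quick stdlib sketch (the regexp is fitted to the lines above, nothing more):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        // Journal prefix carried by every kubelet entry in this excerpt.
        prefix := regexp.MustCompile(`Feb 02 \d{2}:\d{2}:\d{2} crc kubenswrapper\[\d+\]: `)

        // Two entries fused onto one wrapped line, as in the dump above.
        wrapped := `Has your network provider started?"} Feb 02 14:34:26 crc kubenswrapper[4869]: I0202 14:34:26.551419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"`

        // Break the stream before each prefix: one entry (or fragment) per line.
        for _, line := range strings.Split(prefix.ReplaceAllString(wrapped, "\n$0"), "\n") {
            if strings.TrimSpace(line) != "" {
                fmt.Println(line)
            }
        }
    }
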
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.462779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:28 crc kubenswrapper[4869]: E0202 14:34:28.462943 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:28 crc kubenswrapper[4869]: E0202 14:34:28.463094 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.509539 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.615499 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.718546 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.821255 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:28 crc kubenswrapper[4869]: I0202 14:34:28.924120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:28Z","lastTransitionTime":"2026-02-02T14:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.026945 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.131166 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.233846 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.336733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.440643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.453823 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:41:27.259380519 +0000 UTC Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.462767 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:29 crc kubenswrapper[4869]: E0202 14:34:29.462954 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.478993 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.490648 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.502793 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.521581 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.532549 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.544186 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.546067 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.562399 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.576473 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.595455 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.614441 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.634268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.646950 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.649053 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.663556 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.677393 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.690428 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.699821 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.712303 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:29Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.749560 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.852086 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:29 crc kubenswrapper[4869]: I0202 14:34:29.955721 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:29Z","lastTransitionTime":"2026-02-02T14:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.060833 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.165634 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.199683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.199877 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.199990 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:02.19997051 +0000 UTC m=+103.844607280 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.269483 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.372845 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.454862 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:56:05.397620587 +0000 UTC
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.462231 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.462381 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.462493 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.462504 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.462625 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:30 crc kubenswrapper[4869]: E0202 14:34:30.462686 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.475676 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.578071 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.680974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.681015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.681025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.681046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.681056 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.784223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.887132 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:30 crc kubenswrapper[4869]: I0202 14:34:30.990568 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:30Z","lastTransitionTime":"2026-02-02T14:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.092882 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.195676 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.298802 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.402184 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.455990 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:25:58.270244649 +0000 UTC
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.462458 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:31 crc kubenswrapper[4869]: E0202 14:34:31.462629 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.504875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.505015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.505041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.505077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.505105 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.608770 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.711659 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.814289 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:31 crc kubenswrapper[4869]: I0202 14:34:31.917533 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:31Z","lastTransitionTime":"2026-02-02T14:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.020964 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.033260 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/0.log"
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.033310 4869 generic.go:334] "Generic (PLEG): container finished" podID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" containerID="b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9" exitCode=1
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.033344 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerDied","Data":"b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9"}
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.034196 4869 scope.go:117] "RemoveContainer" containerID="b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9"
Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.047878 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.061085 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.073903 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.089818 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.103544 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.121276 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.123773 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.134737 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.146664 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.157342 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.170659 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.182841 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.196424 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.209526 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.222270 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.226230 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.237458 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.259942 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893
c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.274995 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:32Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.329189 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.431796 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.456221 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:53:43.047334506 +0000 UTC Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.462667 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.462684 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.462710 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:32 crc kubenswrapper[4869]: E0202 14:34:32.462895 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:32 crc kubenswrapper[4869]: E0202 14:34:32.462989 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:32 crc kubenswrapper[4869]: E0202 14:34:32.463051 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.535527 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.642737 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.746597 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.851342 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:32 crc kubenswrapper[4869]: I0202 14:34:32.954744 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:32Z","lastTransitionTime":"2026-02-02T14:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.040196 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/0.log" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.040352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.057340 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.058606 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.073921 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.090565 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.109308 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.125014 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.139619 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness 
Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.154261 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.162280 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.170324 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.185832 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.200070 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.223828 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.239661 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.254183 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.265576 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.269343 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.282854 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.299221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.316274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.372781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.373254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.373426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.373588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.373733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.457044 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:32:29.628145708 +0000 UTC Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.462452 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.462620 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.463336 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.463534 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.476770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.477184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.477291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.477406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.477521 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.580310 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.616462 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.634465 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.640260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.658352 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
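Every retry in this stretch fails for the same root cause, visible at the tail of each E-level entry: the TLS handshake to the node-identity webhook at https://127.0.0.1:9743 rejects a serving certificate whose NotAfter (2025-08-24T17:21:41Z) lies before the node's clock (2026-02-02T14:34:33Z). The snippet below is a minimal standalone sketch of the validity-window rule behind Go's "x509: certificate has expired or is not yet valid" message, not kubelet or webhook code, and the certificate path is a placeholder:

// certcheck.go: a standalone sketch (not kubelet code) of the validity
// window that the failing TLS handshake above enforces.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; point it at the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same comparison the handshake applies: in the log, 2026-02-02T14:34:33Z
	// is after NotAfter=2025-08-24T17:21:41Z, so verification fails.
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}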
event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.663634 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.682000 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
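Separately from the webhook failure, each "Node became not ready" entry repeats the same Ready=False condition: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. Below is a rough sketch of that kind of readiness probe, assuming the usual libcni convention that .conf, .conflist, and .json files count as network configs; it is an illustration, not the kubelet's actual implementation:

// cnicheck.go: a rough sketch of the readiness probe behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/";
// illustrative only, not libcni's or the kubelet's implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Assumed libcni-style filter: these extensions count as network configs.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// No configs: the runtime reports NetworkReady=false, and the kubelet
		// keeps publishing the Ready=False condition seen throughout this log.
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Printf("NetworkReady=true: found %v\n", confs)
}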
event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.686357 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.702326 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
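The status patch is attempted a bounded number of times per sync: this log shows five E-level "will retry" entries (14:34:33.634465 through .721049, the fifth just below) before the kubelet gives up with "update node status exceeds retry count". A minimal sketch of that retry-then-give-up shape, where updateNodeStatus is a hypothetical stand-in for the patch call and the constant mirrors the five attempts seen here:

// retrysketch.go: the bounded retry-then-give-up shape visible in this log;
// updateNodeStatus is a hypothetical stand-in for the kubelet's patch call.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // mirrors the five failed attempts logged here

func updateNodeStatus() error {
	// In the log every attempt fails identically, because the webhook's
	// serving certificate is expired and the API server rejects the patch.
	return errors.New("failed to patch status: webhook certificate expired")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := updateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return // a successful patch ends the loop early
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}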
event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.707344 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.721049 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:33Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:33 crc kubenswrapper[4869]: E0202 14:34:33.721752 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.724737 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.828331 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:33 crc kubenswrapper[4869]: I0202 14:34:33.932232 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:33Z","lastTransitionTime":"2026-02-02T14:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.034946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.034995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.035013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.035034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.035047 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.138200 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.240629 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.350417 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.454241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.458232 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:09:51.173278978 +0000 UTC Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.462669 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.462669 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.462818 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:34 crc kubenswrapper[4869]: E0202 14:34:34.462998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:34 crc kubenswrapper[4869]: E0202 14:34:34.463127 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:34 crc kubenswrapper[4869]: E0202 14:34:34.463254 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.556940 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.660592 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.762976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.763056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.763080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.763111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.763132 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.865411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:34 crc kubenswrapper[4869]: I0202 14:34:34.969511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:34Z","lastTransitionTime":"2026-02-02T14:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.072481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.176529 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.280190 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.383241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.458805 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:37:37.528422065 +0000 UTC Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.464198 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:35 crc kubenswrapper[4869]: E0202 14:34:35.464317 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.524192 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.626899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.626970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.626984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.627000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.627011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.730101 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.832239 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:35 crc kubenswrapper[4869]: I0202 14:34:35.935218 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:35Z","lastTransitionTime":"2026-02-02T14:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.038125 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.141598 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.245386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.348976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.349014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.349027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.349042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.349054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.451992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.452023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.452032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.452045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.452054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.459801 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:30:25.541871684 +0000 UTC Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.462201 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.462302 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:36 crc kubenswrapper[4869]: E0202 14:34:36.462492 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:36 crc kubenswrapper[4869]: E0202 14:34:36.462319 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.462533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:36 crc kubenswrapper[4869]: E0202 14:34:36.462662 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.554755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.657525 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.761743 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.865136 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:36 crc kubenswrapper[4869]: I0202 14:34:36.967776 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:36Z","lastTransitionTime":"2026-02-02T14:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.070864 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.174797 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.277728 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.380878 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.460749 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:31:10.677485926 +0000 UTC Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.462118 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:37 crc kubenswrapper[4869]: E0202 14:34:37.462277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.483832 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.587265 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689631 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.689659 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792747 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.792787 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.896141 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:37 crc kubenswrapper[4869]: I0202 14:34:37.999808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:37.999954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:37.999975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.000005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.000026 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:37Z","lastTransitionTime":"2026-02-02T14:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.103399 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.206442 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.309998 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.413096 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.460948 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:51:08.2786379 +0000 UTC Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.462264 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.462343 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:38 crc kubenswrapper[4869]: E0202 14:34:38.462462 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.462482 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:38 crc kubenswrapper[4869]: E0202 14:34:38.462635 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:38 crc kubenswrapper[4869]: E0202 14:34:38.462770 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.516631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.619466 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.722386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.824968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:38 crc kubenswrapper[4869]: I0202 14:34:38.927646 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:38Z","lastTransitionTime":"2026-02-02T14:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.030685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.133167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.237738 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.341250 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.445218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.445983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.446093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.446189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.446281 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.461749 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:20:51.018266935 +0000 UTC Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.463330 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:39 crc kubenswrapper[4869]: E0202 14:34:39.463630 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.481404 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.484393 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.504350 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.520146 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.535152 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.548738 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.549193 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.564201 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.577621 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.604524 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.619648 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.635190 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.650484 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651486 4869 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.651520 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.661825 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" 
Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.677935 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.690882 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.704475 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.718435 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.731476 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:39Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.754278 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.856665 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:39 crc kubenswrapper[4869]: I0202 14:34:39.959488 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:39Z","lastTransitionTime":"2026-02-02T14:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.062237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.166392 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.269346 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.372183 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.462266 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:32:48.419955988 +0000 UTC
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.462417 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.462493 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.462427 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:40 crc kubenswrapper[4869]: E0202 14:34:40.462551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:40 crc kubenswrapper[4869]: E0202 14:34:40.462640 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:40 crc kubenswrapper[4869]: E0202 14:34:40.462745 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.474639 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:40 crc kubenswrapper[4869]: I0202 14:34:40.578371 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:40Z","lastTransitionTime":"2026-02-02T14:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.398986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.399085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.399098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.399145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.399162 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.462223 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:41 crc kubenswrapper[4869]: E0202 14:34:41.462442 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.462521 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:39:14.026302732 +0000 UTC
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.501631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.605647 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.708449 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.813260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:41 crc kubenswrapper[4869]: I0202 14:34:41.917155 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:41Z","lastTransitionTime":"2026-02-02T14:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.020689 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.123301 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.225729 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.328234 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.431102 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.461773 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.461811 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:42 crc kubenswrapper[4869]: E0202 14:34:42.461893 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.461773 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:42 crc kubenswrapper[4869]: E0202 14:34:42.462023 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:42 crc kubenswrapper[4869]: E0202 14:34:42.462534 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.462604 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:52:45.27814978 +0000 UTC
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.534234 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.637486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.741573 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.844404 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:42 crc kubenswrapper[4869]: I0202 14:34:42.948395 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:42Z","lastTransitionTime":"2026-02-02T14:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.050949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.051000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.051008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.051023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.051033 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.153615 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.256832 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.359988 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462037 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:43 crc kubenswrapper[4869]: E0202 14:34:43.462177 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462516 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.462688 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:37:43.219518513 +0000 UTC
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.564860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.564994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.565014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.565039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.565057 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.668543 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.772364 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.875847 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.965033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.965071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.965081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.965096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.965106 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: E0202 14:34:43.980151 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:43Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.984514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.984549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.984563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.984580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:43 crc kubenswrapper[4869]: I0202 14:34:43.984594 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:43Z","lastTransitionTime":"2026-02-02T14:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:43 crc kubenswrapper[4869]: E0202 14:34:43.998968 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:43Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.003549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.016649 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021331 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.021360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.039011 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.044242 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.059876 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:44Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.060016 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.061731 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.165127 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.267475 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.370707 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.462366 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.462546 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.462774 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:29:00.012370873 +0000 UTC Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.462864 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.462970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.463143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:44 crc kubenswrapper[4869]: E0202 14:34:44.463224 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.473551 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.576421 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.679495 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.782973 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.886550 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.989891 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.990076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.990097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.990130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:44 crc kubenswrapper[4869]: I0202 14:34:44.990149 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:44Z","lastTransitionTime":"2026-02-02T14:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.092902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.092970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.092986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.093006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.093019 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.195864 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.298615 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.400790 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.462510 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.463305 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:24:26.04400733 +0000 UTC Feb 02 14:34:45 crc kubenswrapper[4869]: E0202 14:34:45.464050 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.466121 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.488853 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.503992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.504398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.504412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.504431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.504445 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.609236 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.712634 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.815144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:45 crc kubenswrapper[4869]: I0202 14:34:45.917894 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:45Z","lastTransitionTime":"2026-02-02T14:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.020563 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.123415 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.226184 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.328898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.328964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.328975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.328990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.329001 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.417349 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/2.log" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.420496 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.421448 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.432546 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.446253 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.461501 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.461613 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.461695 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.461779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.461872 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.462043 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.462104 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.463656 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:07:43.563524662 +0000 UTC Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469234 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.469549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470180 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.470142383 +0000 UTC m=+152.114779153 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470402 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470425 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470455 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470520 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.470494952 +0000 UTC m=+152.115131722 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470657 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470730 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.470705138 +0000 UTC m=+152.115342098 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470753 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.470797 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.47078451 +0000 UTC m=+152.115421510 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.471196 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.471250 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.471264 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:46 crc kubenswrapper[4869]: E0202 14:34:46.471352 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:35:50.471329083 +0000 UTC m=+152.115966033 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.476288 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.493655 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65804f76-1783-4c7e-b1b2-c8b08c84615f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://798c064c352528e1cb858b56d46099dd05d6159b41279b5318a1b9541ee967f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.510522 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.524828 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535484 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.535595 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.540252 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.553843 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.566997 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.587028 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.613522 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.630450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.639325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.646687 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.664892 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.677968 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.697533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ae4835-4a7a-4f35-9a26-1b652269688f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57dbf7eafb53bffd2a0863b3d1677a65d782cafe67265bea4d1e8803a5547224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://554ab58cbf793e782c21583536d2fc9bc092ae81ce121bcb185521e526e0cdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d970fc73d9516f6d1eb7b1e27f9202e0b7236c6efd95c18bc8478b3e50b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e82a3c47da2576ab596a5cf57e45e6c1ae7f3279945b039297fc25ffbf44fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4088257c658a87ac1ae8eaf8b8b2f731f335d37e83598159143d2d4b19eaa14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.710895 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.730342 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness 
Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.742857 4869 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.745714 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:46Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846440 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:46 crc kubenswrapper[4869]: I0202 14:34:46.846489 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:46Z","lastTransitionTime":"2026-02-02T14:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the same five-entry sequence (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, then the "Node became not ready" KubeletNotReady condition) repeats verbatim apart from timestamps at 14:34:46.950, 14:34:47.053, 14:34:47.156, 14:34:47.259 and 14:34:47.362]
Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.462033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:47 crc kubenswrapper[4869]: E0202 14:34:47.462325 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:34:47 crc kubenswrapper[4869]: I0202 14:34:47.463831 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:13:45.076373692 +0000 UTC
[the five-entry not-ready sequence repeats again at 14:34:47.465 and 14:34:47.569]
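Every "Failed to update status for pod" entry in this stretch dies on the same TLS handshake: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-02. The "certificate has expired or is not yet valid" verdict is nothing more than a clock comparison against the certificate's NotBefore/NotAfter window. A minimal Go sketch that reproduces the check; the certificate path is hypothetical, this log does not name one:

// certcheck.go: reproduce the x509 validity-window test behind the webhook failures above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/webhook-serving.crt") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	// The same window test crypto/x509 applies during verification; when it
	// fails, Go reports "certificate has expired or is not yet valid".
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}

Pointed at the webhook's actual serving certificate, this would print the same "current time ... is after 2025-08-24T17:21:41Z" conclusion that each failed status patch reports.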
[the five-entry not-ready sequence repeats at 14:34:47.672, 14:34:47.775 and 14:34:47.878]
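Each "Node became not ready" line carries the same condition object, and it is ordinary JSON, so the repeats (only the two timestamps move) are easiest to diff after unmarshalling. A self-contained sketch with a hand-rolled struct; the real type lives in k8s.io/api/core/v1 and is only mirrored here:

// condition.go: parse the condition blob the kubelet keeps logging.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition copied from the "Node became not ready" entries above (the 14:34:47 repeats).
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:47Z","lastTransitionTime":"2026-02-02T14:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}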
[the five-entry not-ready sequence repeats at 14:34:47.982, 14:34:48.085 and 14:34:48.188]
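The NotReady condition itself is the network plugin gate: CRI-O reports no CNI configuration under /etc/kubernetes/cni/net.d/, and the multus termination message earlier in the log shows why, since multus is in turn still waiting for the default network's readiness indicator, /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, to appear. The gate reduces to a directory scan for a usable config file. A sketch under the assumption that the conventional libcni extensions (.conf, .conflist, .json) are what counts as usable:

// cnicheck.go: "does the CNI conf directory contain a config file yet?"
package main

import (
	"fmt"
	"path/filepath"
)

func hasCNIConfig(dir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err == nil && len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	for _, dir := range []string{
		"/etc/kubernetes/cni/net.d",  // where the NotReady message says CRI-O looks
		"/host/run/multus/cni/net.d", // where multus waits for 10-ovn-kubernetes.conf
	} {
		if hasCNIConfig(dir) {
			fmt.Println(dir, "has a CNI configuration")
		} else {
			fmt.Println(dir, "is empty; the network plugin stays not-ready")
		}
	}
}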
[the five-entry not-ready sequence repeats at 14:34:48.291 and 14:34:48.394]
Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.462541 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:48 crc kubenswrapper[4869]: E0202 14:34:48.462685 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.462754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:48 crc kubenswrapper[4869]: E0202 14:34:48.462811 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.462858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:48 crc kubenswrapper[4869]: E0202 14:34:48.462964 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:48 crc kubenswrapper[4869]: I0202 14:34:48.464226 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:45:35.836618439 +0000 UTC
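Note the certificate_manager entries: the same expiration (2026-02-24 05:53:03 UTC) yields a different rotation deadline on every sync (2025-12-21 above, 2025-12-03 here, 2026-01-06 below). That is expected behavior: client-go re-jitters the deadline each time, picking a random point at roughly 70 to 90 percent of the certificate's lifetime. Both that fraction and the issue time below are assumptions for illustration, not values taken from this log, though the three logged deadlines do fall inside the band if the certificate has a one-year lifetime:

// rotation.go: sketch of how a jittered rotation deadline like those logged
// by certificate_manager.go can arise. The 70-90% band and the notBefore
// value are assumptions, not copies of the kubelet's actual inputs.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Jitter into the 70%-90% band of the lifetime so a fleet of kubelets
	// does not try to rotate at the same instant.
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed one-year lifetime
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}

Each run prints a different deadline between roughly 2025-11-06 and 2026-01-18, which matches the spread of deadlines the kubelet logs for one and the same certificate.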
[the five-entry not-ready sequence repeats at 14:34:48.496, 14:34:48.600, 14:34:48.703, 14:34:48.807, 14:34:48.910, 14:34:49.014, 14:34:49.117, 14:34:49.221 and 14:34:49.324]
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.427559 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.462316 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:49 crc kubenswrapper[4869]: E0202 14:34:49.462458 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.464620 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:07:26.817158798 +0000 UTC Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.480616 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.498184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.512095 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.524500 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.532618 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.546604 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ae4835-4a7a-4f35-9a26-1b652269688f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57dbf7eafb53bffd2a0863b3d1677a65d782cafe67265bea4d1e8803a5547224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://554ab58cbf793e782c21583536d2fc9bc092ae81ce121bcb185521e526e0cdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d970fc73d9516f6d1eb7b1e27f9202e0b7236c6efd95c18bc8478b3e50b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e82a3c47da2576ab596a5cf57e45e6c1ae7f3279945b039297fc25ffbf44fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4088257c658a87ac1ae8eaf8b8b2f731f335d37e83598159143d2d4b19eaa14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.556403 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65804f76-1783-4c7e-b1b2-c8b08c84615f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://798c064c352528e1cb858b56d46099dd05d6159b41279b5318a1b9541ee967f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.571266 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.587331 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.609937 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.622576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.634898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635149 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.635250 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.648590 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.664807 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.682642 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.696771 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.714414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.728636 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.737568 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.743396 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.756939 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:49Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.840842 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:49 crc kubenswrapper[4869]: I0202 14:34:49.943763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:49Z","lastTransitionTime":"2026-02-02T14:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.045968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.046055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.046094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.046117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.046130 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.149815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.253638 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356295 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.356308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.460217 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.462391 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.462435 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.462403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:50 crc kubenswrapper[4869]: E0202 14:34:50.462563 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:50 crc kubenswrapper[4869]: E0202 14:34:50.462692 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:50 crc kubenswrapper[4869]: E0202 14:34:50.462741 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.465532 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:42:26.11356667 +0000 UTC Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.563657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.563731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.563746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.564110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.564144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.666564 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.769993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.871900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.871998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.872015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.872038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.872057 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:50 crc kubenswrapper[4869]: I0202 14:34:50.975152 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:50Z","lastTransitionTime":"2026-02-02T14:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.078406 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.181489 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.284931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.285013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.285030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.285057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.285074 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.388642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.389131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.389163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.389194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.389216 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.462412 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:51 crc kubenswrapper[4869]: E0202 14:34:51.462611 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.466534 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:10:52.838350978 +0000 UTC Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.492440 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.595754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.596174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.596270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.596372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.596481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.699223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.699623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.699817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.700112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.700341 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.803872 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906227 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:51 crc kubenswrapper[4869]: I0202 14:34:51.906256 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:51Z","lastTransitionTime":"2026-02-02T14:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.008811 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.111156 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.213332 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.316165 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.419882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.419978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.420000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.420023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.420041 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.462645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.462736 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:34:52 crc kubenswrapper[4869]: E0202 14:34:52.462877 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.462994 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:34:52 crc kubenswrapper[4869]: E0202 14:34:52.463086 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:34:52 crc kubenswrapper[4869]: E0202 14:34:52.463244 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.467002 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:34:09.311906316 +0000 UTC
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.522548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.624665 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.727648 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.831424 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:52 crc kubenswrapper[4869]: I0202 14:34:52.934082 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:52Z","lastTransitionTime":"2026-02-02T14:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.037504 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.140254 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.243254 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.346786 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.449683 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.461701 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:34:53 crc kubenswrapper[4869]: E0202 14:34:53.462015 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.467170 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:46:00.544050706 +0000 UTC
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.552823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.553468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.553687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.553901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.554426 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.658155 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.761659 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.864576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.865163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.865190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.865218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.865239 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:53 crc kubenswrapper[4869]: I0202 14:34:53.968415 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:53Z","lastTransitionTime":"2026-02-02T14:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.071656 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.175700 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.270427 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.294041 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.298978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.299037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.299051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.299071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.299085 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.315227 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.321874 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.335686 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.340133 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.352005 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.358212 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.372220 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1c099235-d602-4e51-9f67-7e55e0b34cd4\\\",\\\"systemUUID\\\":\\\"0aa343f6-2c18-4e4e-b19b-25e42d92b529\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:54Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.372420 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.375590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.376073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.376205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.376317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.376437 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.462109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.462210 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.462143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.462678 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.462819 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:54 crc kubenswrapper[4869]: E0202 14:34:54.463000 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.467482 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:03:12.077957872 +0000 UTC Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.479972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.480047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.480064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.480089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.480105 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.584212 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.686701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.789464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.893344 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:54 crc kubenswrapper[4869]: I0202 14:34:54.996627 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:54Z","lastTransitionTime":"2026-02-02T14:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.100311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.203518 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.305779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.305832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.305849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.306246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.306267 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.408598 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.462694 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:55 crc kubenswrapper[4869]: E0202 14:34:55.462949 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.467629 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 08:26:05.709142723 +0000 UTC Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.512675 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.615509 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.718533 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.821825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.821953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.821982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.822007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.822024 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:55 crc kubenswrapper[4869]: I0202 14:34:55.925349 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:55Z","lastTransitionTime":"2026-02-02T14:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.028531 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.131576 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.240884 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.343898 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.446803 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.462184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.462210 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.462776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:56 crc kubenswrapper[4869]: E0202 14:34:56.463010 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:56 crc kubenswrapper[4869]: E0202 14:34:56.463262 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:56 crc kubenswrapper[4869]: E0202 14:34:56.463357 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.468717 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:16:25.036134343 +0000 UTC Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.550520 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.653351 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.756861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.757087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.757120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.757147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.757167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.860515 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:56 crc kubenswrapper[4869]: I0202 14:34:56.963593 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:56Z","lastTransitionTime":"2026-02-02T14:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.065876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.065961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.065981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.066033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.066054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.168976 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.271581 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.373959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.374014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.374026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.374048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.374059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.461697 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:57 crc kubenswrapper[4869]: E0202 14:34:57.461902 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.469751 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:44:08.527890612 +0000 UTC Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.476969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.477022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.477032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.477049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.477059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.580169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.682882 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.786351 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888938 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.888970 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:57 crc kubenswrapper[4869]: I0202 14:34:57.991109 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:57Z","lastTransitionTime":"2026-02-02T14:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.094496 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.198117 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.301855 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.404767 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.462064 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.462109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:34:58 crc kubenswrapper[4869]: E0202 14:34:58.462238 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.462328 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:34:58 crc kubenswrapper[4869]: E0202 14:34:58.462480 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:34:58 crc kubenswrapper[4869]: E0202 14:34:58.462597 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.470790 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:08:05.264400504 +0000 UTC Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.507713 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
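[Annotation] The same handful of pods cycle through "No sandbox for pod can be found" and "Error syncing pod, skipping" roughly once per second, all blocked on the same missing CNI configuration. A Python sketch, again keyed to the record layout above (not to any kubelet API), that tallies the retries per pod:

# Sketch: tally the "Error syncing pod, skipping" retries per pod from a
# kubelet log, to see which workloads are blocked on the missing CNI config.
import re
from collections import Counter

POD_RE = re.compile(
    r'pod_workers\.go:\d+\] "Error syncing pod, skipping".*pod="(?P<pod>[^"]+)"'
)

def blocked_pods(log_lines):
    counts = Counter()
    for line in log_lines:
        m = POD_RE.search(line)
        if m:
            counts[m.group("pod")] += 1
    # e.g. Counter({"openshift-multus/network-metrics-daemon-qx2qt": 3, ...})
    return counts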
Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.610842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.611284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.611587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.611762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.611894 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.714884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.715265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.715346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.715421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.715487 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.817714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.818092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.818184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.818278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.818353 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.921699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.921841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.921875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.921988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:58 crc kubenswrapper[4869]: I0202 14:34:58.922015 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:58Z","lastTransitionTime":"2026-02-02T14:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.024540 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.127299 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.229413 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.332207 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.435363 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.462337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:34:59 crc kubenswrapper[4869]: E0202 14:34:59.462544 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.471755 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:50:54.928275588 +0000 UTC Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.477999 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.493520 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.508538 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9b21ba6de36019c3d1607bb6c7e961c48eef36da6e55f9405e9df975d281ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3110bdb80b78d9cd9f082242133d17fad26d27f1f98d3d5d4505d6cf975064a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.526132 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49510a01-65b6-4a4a-a398-11a00b05a68d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0202 14:33:42.123383 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1770042805\\\\\\\\\\\\\\\" (2026-02-02 14:33:25 +0000 UTC to 2026-03-04 14:33:26 +0000 UTC (now=2026-02-02 14:33:42.123334862 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123407 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0202 14:33:42.123434 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0202 14:33:42.123534 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1770042816\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1770042816\\\\\\\\\\\\\\\" (2026-02-02 13:33:36 +0000 UTC to 2027-02-02 13:33:36 +0000 UTC (now=2026-02-02 14:33:42.123513367 +0000 UTC))\\\\\\\"\\\\nI0202 14:33:42.123562 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1084470218/tls.crt::/tmp/serving-cert-1084470218/tls.key\\\\\\\"\\\\nI0202 14:33:42.123538 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0202 14:33:42.123577 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0202 14:33:42.123562 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0202 14:33:42.123631 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0202 14:33:42.123628 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0202 14:33:42.126587 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.537539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc 
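[Annotation] The status_manager.go:875 failures above all share one root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02, so every pod-status patch is rejected at the TLS layer. A hedged Python sketch for confirming a served certificate's validity window; it assumes the third-party cryptography package (any X.509 parser would do), and the host/port come from the log:

# Sketch: fetch the webhook's serving certificate and compare its validity
# window against the current time, mirroring the x509 errors above.
# Assumes the third-party `cryptography` package; not a kubelet facility.
import ssl
from datetime import datetime, timezone
from cryptography import x509

def cert_window(host: str = "127.0.0.1", port: int = 9743):
    # get_server_certificate fetches the PEM without verifying it, which is
    # what we want here since the certificate is already expired.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    now = datetime.now(timezone.utc)
    # not_valid_*_utc needs cryptography >= 42; older versions expose
    # the naive not_valid_before / not_valid_after attributes instead.
    return (cert.not_valid_before_utc,
            cert.not_valid_after_utc,
            now > cert.not_valid_after_utc)  # True -> expired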
kubenswrapper[4869]: I0202 14:34:59.537555 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.538538 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"260b503d-6953-457b-a958-728b5ccc47a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df6c6bbcaa04d04ed0921d91e16f806c4332c28a9746d66b9e325be28c814f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aca9b939605258f917ee75de98cec0e1f6bbefe8205f869140de0ca3a15118a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-man
ager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://937a00099cafae99e7785a715ad28af945e710bb84abfd7b2e830424d3b24b06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.550606 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d9vfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:31Z\\\",\\\"message\\\":\\\"2026-02-02T14:33:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4\\\\n2026-02-02T14:33:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0448a25c-89ad-4c17-9469-468a9cdc0fe4 to /host/opt/cni/bin/\\\\n2026-02-02T14:33:46Z [verbose] multus-daemon started\\\\n2026-02-02T14:33:46Z [verbose] Readiness Indicator file check\\\\n2026-02-02T14:34:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qr7b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d9vfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.558883 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-492m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"728209c5-b124-458f-b315-306433a62a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d309118883f2aba5bfbd6ce1b86732769243c6f75476c01c6be1ea94fde2843\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgx7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:46Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-492m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.573949 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7087ae0f-5f9b-4da3-8081-6417819b70e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41abe7b9a57ce7e4afbdf71dcf1b036c18adac85efd8d0cf27e7072bf7252b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f99804835bf8fb7095d0d3d29e3b175e9ddaabdf901104d3020ed2ba62e9b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfznq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4zdpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 
14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.591484 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37ae4835-4a7a-4f35-9a26-1b652269688f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57dbf7eafb53bffd2a0863b3d1677a65d782cafe67265bea4d1e8803a5547224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://554ab58cbf793e782c21583536d2fc9bc092ae81ce121bcb185521e526e0cdf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d970fc73d9516f6d1eb7b1e27f9202e0b7236c6efd95c18bc8478b3e50b1f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32e82a3c47da2576ab596a5cf57e45e6c1ae7f3279945b039297fc25ffbf44fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4088257c658a87ac1ae8eaf8b8b2f731f335d37e83598159143d2d4b19eaa14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed2514e57646db1c1751eab6be0b380ce34397f4a085b2790a70ed02fa03f0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://549b3a8726adb7c88b19622dcb13ce70cf596f48cdec96a8007fdb3d9ed2c36a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c8458b3096099a70f71ab06fe41a171697e49422c517ea38547bd2c12530a1c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.600750 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65804f76-1783-4c7e-b1b2-c8b08c84615f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://798c064c352528e1cb858b56d46099dd05d6159b41279b5318a1b9541ee967f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9bb05219ca3eeb09adba9b4d18e48999ffbfbf92631814a9cc32c69e5e61eaf8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.614007 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8abf537d2a86faa602667316d4ef29abb8869974b0f8a070cc04c2e7a07063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.625687 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a649255d-23ef-4070-9acc-2adb7d94bc21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60440f8c79010c12a870e9d8a4d70c83eb0917c0b4762b06c5ee2e42b8149d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wdcm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dql2j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639900 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.639996 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.641770 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2865336a-500d-43e5-a075-a9a8fa01b929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63bc2c9bc90b9fab3d75a45efcf106325408f08f
f1ab4e7b2ad5b92cad760ee0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T14:34:18Z\\\",\\\"message\\\":\\\" initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:18Z is after 2025-08-24T17:21:41Z]\\\\nI0202 14:34:18.379753 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-d9vfd after 0 failed attempt(s)\\\\nI0202 14:34:18.379749 6587 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0202 14:34:18.379762 6587 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-d9vfd\\\\nI0202 14:34:18.379770 6587 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0202 14:34:18.379779 6587 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0202 14:34:18.379634 6587 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T14:34:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:34:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lzk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qmsw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.650405 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b597927-2943-4e1a-bac5-1266d539e8f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fp98\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qx2qt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.659726 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e0ab3c8-71c5-446e-af13-8fb51eca4029\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:34:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a48293a7e09c1d626407beec7a9572388acd48f2f6aa0b9d96b194ff3d67cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20c27e2875a78e0946e4addf7684d1335d93f1cdaedbdf25261aca2cc5a9feab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bd987b1142e275d540df79a6a19d6de0fab58d1a2747ee921414cc2b3a7090b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36f193ef4302ea13f2058b25dea69944debd1ed9aed4d2688fd58c9061c9141f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.670764 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd91a63826dfbd15feecde00a38468f18651b8d076cedbaef7e38e399977552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 
2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.681657 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7tlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c17c822d-8d51-42d0-9cae-7b607f9af79a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcac4b67d60611404dbed0bda2ff0a2ae5a3397b120d64a9bbcea121efea1453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvkw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7tlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.696757 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-862tl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34b37351-c7be-4d2b-9b3a-9b4752d9d2d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://919d215c53faa946401509698755c9bde0a3497c30c08895131386db22a8be47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T14:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0985b918220f8e3a5dfeb9e0a7bdbbef922b563fba3008812e83cd344c910cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1828c2488e25a3ba098d7976b393a8bdb2601fbaff182be04cf033d765e5db3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6e7aedbdde65f06484d6a7a5b1a6f40f2109e0afac00346faed36acf882a46f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b2ecf08848e23c615439838d87e4465fb11ae568298f8f897b9561d21eb590\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b3908cda82250e0e5ae4c335b8ad3970d7d2cb49db14ae01f10662b6c3aafd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99521117a0dad9d7e40789a5d0d5080c80ddd0c11be3b898dea3d80978f06b01\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T14:33:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T14:33:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcz5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T14:33:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-862tl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.708738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T14:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T14:34:59Z is after 2025-08-24T17:21:41Z" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.742751 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.845605 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:34:59 crc kubenswrapper[4869]: I0202 14:34:59.956325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:34:59Z","lastTransitionTime":"2026-02-02T14:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.059203 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.162624 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.265831 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.368600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.462258 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.462376 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:00 crc kubenswrapper[4869]: E0202 14:35:00.462460 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.462507 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:00 crc kubenswrapper[4869]: E0202 14:35:00.462600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:00 crc kubenswrapper[4869]: E0202 14:35:00.462699 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.471872 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:43:32.179007309 +0000 UTC Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472830 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.472888 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.575380 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.678774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.678971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.679014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.679043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.679067 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.782510 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.885756 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:00 crc kubenswrapper[4869]: I0202 14:35:00.988169 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:00Z","lastTransitionTime":"2026-02-02T14:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.090474 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.192506 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.295211 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.397795 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.462248 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:01 crc kubenswrapper[4869]: E0202 14:35:01.462709 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.472491 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:31:04.100348493 +0000 UTC Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.500261 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.603206 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.707825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.707973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.707984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.708022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.708041 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.811398 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914751 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:01 crc kubenswrapper[4869]: I0202 14:35:01.914842 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:01Z","lastTransitionTime":"2026-02-02T14:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.017982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.018028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.018040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.018056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.018068 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.120814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.120902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.120973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.120996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.121011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.223831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.224125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.224194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.224309 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.224369 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.264803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.265517 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.265773 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs podName:0b597927-2943-4e1a-bac5-1266d539e8f8 nodeName:}" failed. No retries permitted until 2026-02-02 14:36:06.265739627 +0000 UTC m=+167.910376437 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs") pod "network-metrics-daemon-qx2qt" (UID: "0b597927-2943-4e1a-bac5-1266d539e8f8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.326237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.429348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.462664 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.462807 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.462691 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.462898 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.462999 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:02 crc kubenswrapper[4869]: E0202 14:35:02.463076 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.472905 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:45:07.448446532 +0000 UTC Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.531718 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.634664 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.737957 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.841563 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.915362 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" probeResult="failure" output=""
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:02 crc kubenswrapper[4869]: I0202 14:35:02.944928 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:02Z","lastTransitionTime":"2026-02-02T14:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.048190 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.151339 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.254838 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.357830 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.460652 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.462391 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:03 crc kubenswrapper[4869]: E0202 14:35:03.462531 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.473532 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:30:14.781165447 +0000 UTC
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.562997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.563038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.563047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.563061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.563070 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.666600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.769742 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.873639 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:03 crc kubenswrapper[4869]: I0202 14:35:03.976827 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:03Z","lastTransitionTime":"2026-02-02T14:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.079793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.183220 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.287357 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.390547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.461771 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.461834 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:04 crc kubenswrapper[4869]: E0202 14:35:04.462070 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:04 crc kubenswrapper[4869]: E0202 14:35:04.462202 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.462336 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:04 crc kubenswrapper[4869]: E0202 14:35:04.462479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.473673 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:41:54.971958613 +0000 UTC
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.493643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582568 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.582706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T14:35:04Z","lastTransitionTime":"2026-02-02T14:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.652108 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"]
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.652732 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.655244 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.655267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.655410 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.655640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.715696 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.71566982 podStartE2EDuration="1m17.71566982s" podCreationTimestamp="2026-02-02 14:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.715223629 +0000 UTC m=+106.359860419" watchObservedRunningTime="2026-02-02 14:35:04.71566982 +0000 UTC m=+106.360306600"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.716049 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.71604088 podStartE2EDuration="19.71604088s" podCreationTimestamp="2026-02-02 14:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.695429573 +0000 UTC m=+106.340066343" watchObservedRunningTime="2026-02-02 14:35:04.71604088 +0000 UTC m=+106.360677670"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.736183 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-d9vfd" podStartSLOduration=82.736159914 podStartE2EDuration="1m22.736159914s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.733293191 +0000 UTC m=+106.377929961" watchObservedRunningTime="2026-02-02 14:35:04.736159914 +0000 UTC m=+106.380796704"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.768377 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4zdpx" podStartSLOduration=81.768317806 podStartE2EDuration="1m21.768317806s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.76615863 +0000 UTC m=+106.410795400" watchObservedRunningTime="2026-02-02 14:35:04.768317806 +0000 UTC m=+106.412954606"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.768592 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-492m9" podStartSLOduration=82.768584793 podStartE2EDuration="1m22.768584793s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.749290819 +0000 UTC m=+106.393927599" watchObservedRunningTime="2026-02-02 14:35:04.768584793 +0000 UTC m=+106.413221603"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795686 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35773d6f-75dc-4f55-b843-7153b80a9ce9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795765 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795802 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35773d6f-75dc-4f55-b843-7153b80a9ce9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.795938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35773d6f-75dc-4f55-b843-7153b80a9ce9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.825924 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.825874627 podStartE2EDuration="57.825874627s" podCreationTimestamp="2026-02-02 14:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.824122372 +0000 UTC m=+106.468759142" watchObservedRunningTime="2026-02-02 14:35:04.825874627 +0000 UTC m=+106.470511397"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.864440 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.864412662 podStartE2EDuration="25.864412662s" podCreationTimestamp="2026-02-02 14:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.842226765 +0000 UTC m=+106.486863545" watchObservedRunningTime="2026-02-02 14:35:04.864412662 +0000 UTC m=+106.509049432"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.879966 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podStartSLOduration=82.879948479 podStartE2EDuration="1m22.879948479s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.879562919 +0000 UTC m=+106.524199689" watchObservedRunningTime="2026-02-02 14:35:04.879948479 +0000 UTC m=+106.524585249"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897118 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35773d6f-75dc-4f55-b843-7153b80a9ce9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35773d6f-75dc-4f55-b843-7153b80a9ce9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35773d6f-75dc-4f55-b843-7153b80a9ce9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.897455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35773d6f-75dc-4f55-b843-7153b80a9ce9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.898562 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35773d6f-75dc-4f55-b843-7153b80a9ce9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.911623 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podStartSLOduration=82.911604538 podStartE2EDuration="1m22.911604538s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.910936181 +0000 UTC m=+106.555572971" watchObservedRunningTime="2026-02-02 14:35:04.911604538 +0000 UTC m=+106.556241308"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.914006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35773d6f-75dc-4f55-b843-7153b80a9ce9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.926997 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35773d6f-75dc-4f55-b843-7153b80a9ce9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-r68qg\" (UID: \"35773d6f-75dc-4f55-b843-7153b80a9ce9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg"
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" Feb 02 14:35:04 crc kubenswrapper[4869]: I0202 14:35:04.974959 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7tlsl" podStartSLOduration=82.974938836 podStartE2EDuration="1m22.974938836s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:04.974562547 +0000 UTC m=+106.619199327" watchObservedRunningTime="2026-02-02 14:35:04.974938836 +0000 UTC m=+106.619575606" Feb 02 14:35:04 crc kubenswrapper[4869]: W0202 14:35:04.983406 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35773d6f_75dc_4f55_b843_7153b80a9ce9.slice/crio-8e284cd0acbd0620166acfd6e9729308b21210d2214cea3bb3f4ad7c37a73ef9 WatchSource:0}: Error finding container 8e284cd0acbd0620166acfd6e9729308b21210d2214cea3bb3f4ad7c37a73ef9: Status 404 returned error can't find the container with id 8e284cd0acbd0620166acfd6e9729308b21210d2214cea3bb3f4ad7c37a73ef9 Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.023419 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-862tl" podStartSLOduration=83.023401185 podStartE2EDuration="1m23.023401185s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:05.005933319 +0000 UTC m=+106.650570109" watchObservedRunningTime="2026-02-02 14:35:05.023401185 +0000 UTC m=+106.668037955" Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.040561 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=83.040532533 podStartE2EDuration="1m23.040532533s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:05.023489837 +0000 UTC m=+106.668126597" watchObservedRunningTime="2026-02-02 14:35:05.040532533 +0000 UTC m=+106.685169303" Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.462777 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:05 crc kubenswrapper[4869]: E0202 14:35:05.463025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.473888 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:08:58.527436964 +0000 UTC Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.474012 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.482808 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.490389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" event={"ID":"35773d6f-75dc-4f55-b843-7153b80a9ce9","Type":"ContainerStarted","Data":"76d5a5f96044e67002795d68db9e260745dea48860dbf17e6ad7116fdc2c0027"} Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.490433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" event={"ID":"35773d6f-75dc-4f55-b843-7153b80a9ce9","Type":"ContainerStarted","Data":"8e284cd0acbd0620166acfd6e9729308b21210d2214cea3bb3f4ad7c37a73ef9"} Feb 02 14:35:05 crc kubenswrapper[4869]: I0202 14:35:05.506294 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-r68qg" podStartSLOduration=83.506270656 podStartE2EDuration="1m23.506270656s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:35:05.505990229 +0000 UTC m=+107.150627039" watchObservedRunningTime="2026-02-02 14:35:05.506270656 +0000 UTC m=+107.150907416" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.462535 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.462575 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.462621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:06 crc kubenswrapper[4869]: E0202 14:35:06.462677 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:06 crc kubenswrapper[4869]: E0202 14:35:06.462779 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:06 crc kubenswrapper[4869]: E0202 14:35:06.462942 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.496280 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.497074 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/2.log" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.500659 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" exitCode=1 Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.500717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0"} Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.500772 4869 scope.go:117] "RemoveContainer" containerID="1b60ae2dce4946acdaa40c0f9e96349072fea893c155232a84507a2e72bdff46" Feb 02 14:35:06 crc kubenswrapper[4869]: I0202 14:35:06.501773 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:06 crc kubenswrapper[4869]: E0202 14:35:06.501998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:35:07 crc kubenswrapper[4869]: I0202 14:35:07.462261 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:07 crc kubenswrapper[4869]: E0202 14:35:07.462400 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:07 crc kubenswrapper[4869]: I0202 14:35:07.506011 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:35:08 crc kubenswrapper[4869]: I0202 14:35:08.462205 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 14:35:08 crc kubenswrapper[4869]: I0202 14:35:08.462205 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:08 crc kubenswrapper[4869]: I0202 14:35:08.462288 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:08 crc kubenswrapper[4869]: E0202 14:35:08.462327 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:08 crc kubenswrapper[4869]: I0202 14:35:08.462205 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:08 crc kubenswrapper[4869]: E0202 14:35:08.462424 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:08 crc kubenswrapper[4869]: E0202 14:35:08.462581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:09 crc kubenswrapper[4869]: I0202 14:35:09.461823 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:09 crc kubenswrapper[4869]: E0202 14:35:09.465642 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:10 crc kubenswrapper[4869]: I0202 14:35:10.462504 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:10 crc kubenswrapper[4869]: E0202 14:35:10.462683 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:10 crc kubenswrapper[4869]: I0202 14:35:10.462758 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:10 crc kubenswrapper[4869]: I0202 14:35:10.462764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:10 crc kubenswrapper[4869]: E0202 14:35:10.462846 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:10 crc kubenswrapper[4869]: E0202 14:35:10.463044 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:11 crc kubenswrapper[4869]: I0202 14:35:11.462103 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:11 crc kubenswrapper[4869]: E0202 14:35:11.462270 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:12 crc kubenswrapper[4869]: I0202 14:35:12.461960 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:12 crc kubenswrapper[4869]: I0202 14:35:12.462103 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:12 crc kubenswrapper[4869]: I0202 14:35:12.462112 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:12 crc kubenswrapper[4869]: E0202 14:35:12.462738 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:12 crc kubenswrapper[4869]: E0202 14:35:12.462903 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:12 crc kubenswrapper[4869]: E0202 14:35:12.463149 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:13 crc kubenswrapper[4869]: I0202 14:35:13.463376 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:13 crc kubenswrapper[4869]: E0202 14:35:13.463623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:14 crc kubenswrapper[4869]: I0202 14:35:14.462244 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:14 crc kubenswrapper[4869]: E0202 14:35:14.462456 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:14 crc kubenswrapper[4869]: I0202 14:35:14.462696 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:14 crc kubenswrapper[4869]: I0202 14:35:14.462739 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:14 crc kubenswrapper[4869]: E0202 14:35:14.462818 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:14 crc kubenswrapper[4869]: E0202 14:35:14.463049 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:15 crc kubenswrapper[4869]: I0202 14:35:15.462789 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:15 crc kubenswrapper[4869]: E0202 14:35:15.463090 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:16 crc kubenswrapper[4869]: I0202 14:35:16.461817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:16 crc kubenswrapper[4869]: I0202 14:35:16.461878 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:16 crc kubenswrapper[4869]: I0202 14:35:16.461944 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:16 crc kubenswrapper[4869]: E0202 14:35:16.461997 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:16 crc kubenswrapper[4869]: E0202 14:35:16.462129 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:16 crc kubenswrapper[4869]: E0202 14:35:16.462295 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:17 crc kubenswrapper[4869]: I0202 14:35:17.463044 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:17 crc kubenswrapper[4869]: I0202 14:35:17.463896 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:17 crc kubenswrapper[4869]: E0202 14:35:17.464147 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:35:17 crc kubenswrapper[4869]: E0202 14:35:17.464713 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.462418 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.462452 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.462583 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:18 crc kubenswrapper[4869]: E0202 14:35:18.462737 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:18 crc kubenswrapper[4869]: E0202 14:35:18.462862 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:18 crc kubenswrapper[4869]: E0202 14:35:18.463074 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.545209 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/1.log" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.546464 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/0.log" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.546526 4869 generic.go:334] "Generic (PLEG): container finished" podID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" containerID="e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a" exitCode=1 Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.546574 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerDied","Data":"e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a"} Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.546636 4869 scope.go:117] "RemoveContainer" containerID="b3728c748f911da66c28f7646cacb7cc271673c5636038046019b26e1acb00d9" Feb 02 14:35:18 crc kubenswrapper[4869]: I0202 14:35:18.547070 4869 scope.go:117] "RemoveContainer" containerID="e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a" Feb 02 14:35:18 crc kubenswrapper[4869]: E0202 14:35:18.547266 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-d9vfd_openshift-multus(45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0)\"" pod="openshift-multus/multus-d9vfd" podUID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" Feb 02 14:35:19 crc kubenswrapper[4869]: I0202 14:35:19.462233 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:19 crc kubenswrapper[4869]: E0202 14:35:19.463440 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:19 crc kubenswrapper[4869]: E0202 14:35:19.470986 4869 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 14:35:19 crc kubenswrapper[4869]: I0202 14:35:19.554356 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/1.log" Feb 02 14:35:19 crc kubenswrapper[4869]: E0202 14:35:19.570488 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:20 crc kubenswrapper[4869]: I0202 14:35:20.461879 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:20 crc kubenswrapper[4869]: I0202 14:35:20.462033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:20 crc kubenswrapper[4869]: E0202 14:35:20.462175 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:20 crc kubenswrapper[4869]: I0202 14:35:20.462271 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:20 crc kubenswrapper[4869]: E0202 14:35:20.462334 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:20 crc kubenswrapper[4869]: E0202 14:35:20.462454 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:21 crc kubenswrapper[4869]: I0202 14:35:21.462159 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:21 crc kubenswrapper[4869]: E0202 14:35:21.463146 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:22 crc kubenswrapper[4869]: I0202 14:35:22.462488 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:22 crc kubenswrapper[4869]: I0202 14:35:22.462609 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:22 crc kubenswrapper[4869]: E0202 14:35:22.462733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:22 crc kubenswrapper[4869]: I0202 14:35:22.462849 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:22 crc kubenswrapper[4869]: E0202 14:35:22.462992 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:22 crc kubenswrapper[4869]: E0202 14:35:22.463143 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:23 crc kubenswrapper[4869]: I0202 14:35:23.462298 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:23 crc kubenswrapper[4869]: E0202 14:35:23.462464 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:24 crc kubenswrapper[4869]: I0202 14:35:24.462645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:24 crc kubenswrapper[4869]: I0202 14:35:24.462645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:24 crc kubenswrapper[4869]: I0202 14:35:24.462634 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:24 crc kubenswrapper[4869]: E0202 14:35:24.462813 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:24 crc kubenswrapper[4869]: E0202 14:35:24.463025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:24 crc kubenswrapper[4869]: E0202 14:35:24.463162 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:24 crc kubenswrapper[4869]: E0202 14:35:24.572284 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:25 crc kubenswrapper[4869]: I0202 14:35:25.462309 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:25 crc kubenswrapper[4869]: E0202 14:35:25.462510 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:26 crc kubenswrapper[4869]: I0202 14:35:26.462068 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:26 crc kubenswrapper[4869]: E0202 14:35:26.462290 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:26 crc kubenswrapper[4869]: I0202 14:35:26.462100 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:26 crc kubenswrapper[4869]: I0202 14:35:26.462076 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:26 crc kubenswrapper[4869]: E0202 14:35:26.462429 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:26 crc kubenswrapper[4869]: E0202 14:35:26.462797 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:27 crc kubenswrapper[4869]: I0202 14:35:27.462702 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:27 crc kubenswrapper[4869]: E0202 14:35:27.463085 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:28 crc kubenswrapper[4869]: I0202 14:35:28.462066 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:28 crc kubenswrapper[4869]: I0202 14:35:28.462147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:28 crc kubenswrapper[4869]: I0202 14:35:28.462266 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:28 crc kubenswrapper[4869]: E0202 14:35:28.462307 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:28 crc kubenswrapper[4869]: E0202 14:35:28.462652 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:28 crc kubenswrapper[4869]: E0202 14:35:28.462738 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:28 crc kubenswrapper[4869]: I0202 14:35:28.463391 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:28 crc kubenswrapper[4869]: E0202 14:35:28.463612 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:35:29 crc kubenswrapper[4869]: I0202 14:35:29.462346 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:29 crc kubenswrapper[4869]: E0202 14:35:29.463845 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:29 crc kubenswrapper[4869]: E0202 14:35:29.573010 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:30 crc kubenswrapper[4869]: I0202 14:35:30.462458 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:30 crc kubenswrapper[4869]: I0202 14:35:30.462569 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:30 crc kubenswrapper[4869]: E0202 14:35:30.462624 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:30 crc kubenswrapper[4869]: E0202 14:35:30.462754 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:30 crc kubenswrapper[4869]: I0202 14:35:30.462569 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:30 crc kubenswrapper[4869]: E0202 14:35:30.462872 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:31 crc kubenswrapper[4869]: I0202 14:35:31.462147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:31 crc kubenswrapper[4869]: E0202 14:35:31.462326 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:31 crc kubenswrapper[4869]: I0202 14:35:31.462618 4869 scope.go:117] "RemoveContainer" containerID="e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.461746 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:32 crc kubenswrapper[4869]: E0202 14:35:32.462453 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.461990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:32 crc kubenswrapper[4869]: E0202 14:35:32.462551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.461931 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:32 crc kubenswrapper[4869]: E0202 14:35:32.462725 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.602137 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/1.log" Feb 02 14:35:32 crc kubenswrapper[4869]: I0202 14:35:32.602217 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9"} Feb 02 14:35:33 crc kubenswrapper[4869]: I0202 14:35:33.461836 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:33 crc kubenswrapper[4869]: E0202 14:35:33.462045 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:34 crc kubenswrapper[4869]: I0202 14:35:34.462527 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:34 crc kubenswrapper[4869]: I0202 14:35:34.462652 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:34 crc kubenswrapper[4869]: I0202 14:35:34.462713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:34 crc kubenswrapper[4869]: E0202 14:35:34.462737 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:34 crc kubenswrapper[4869]: E0202 14:35:34.462833 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:34 crc kubenswrapper[4869]: E0202 14:35:34.462943 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:34 crc kubenswrapper[4869]: E0202 14:35:34.575230 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:35 crc kubenswrapper[4869]: I0202 14:35:35.462012 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:35 crc kubenswrapper[4869]: E0202 14:35:35.462219 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:36 crc kubenswrapper[4869]: I0202 14:35:36.462498 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:36 crc kubenswrapper[4869]: I0202 14:35:36.462693 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:36 crc kubenswrapper[4869]: I0202 14:35:36.462803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:36 crc kubenswrapper[4869]: E0202 14:35:36.462732 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:36 crc kubenswrapper[4869]: E0202 14:35:36.463022 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:36 crc kubenswrapper[4869]: E0202 14:35:36.463181 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:37 crc kubenswrapper[4869]: I0202 14:35:37.461824 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:37 crc kubenswrapper[4869]: E0202 14:35:37.462074 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:38 crc kubenswrapper[4869]: I0202 14:35:38.461899 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:38 crc kubenswrapper[4869]: I0202 14:35:38.462065 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:38 crc kubenswrapper[4869]: I0202 14:35:38.462146 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:38 crc kubenswrapper[4869]: E0202 14:35:38.462155 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:38 crc kubenswrapper[4869]: E0202 14:35:38.462281 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:38 crc kubenswrapper[4869]: E0202 14:35:38.462557 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:39 crc kubenswrapper[4869]: I0202 14:35:39.462151 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:39 crc kubenswrapper[4869]: E0202 14:35:39.463674 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:39 crc kubenswrapper[4869]: E0202 14:35:39.575901 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:40 crc kubenswrapper[4869]: I0202 14:35:40.462726 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:40 crc kubenswrapper[4869]: I0202 14:35:40.462858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:40 crc kubenswrapper[4869]: I0202 14:35:40.462980 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:40 crc kubenswrapper[4869]: E0202 14:35:40.463038 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:40 crc kubenswrapper[4869]: E0202 14:35:40.463159 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:40 crc kubenswrapper[4869]: E0202 14:35:40.463448 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:41 crc kubenswrapper[4869]: I0202 14:35:41.462449 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:41 crc kubenswrapper[4869]: E0202 14:35:41.462621 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:42 crc kubenswrapper[4869]: I0202 14:35:42.462633 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:42 crc kubenswrapper[4869]: I0202 14:35:42.462657 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:42 crc kubenswrapper[4869]: E0202 14:35:42.463627 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:42 crc kubenswrapper[4869]: I0202 14:35:42.462688 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:42 crc kubenswrapper[4869]: E0202 14:35:42.463720 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:42 crc kubenswrapper[4869]: E0202 14:35:42.463852 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:43 crc kubenswrapper[4869]: I0202 14:35:43.462672 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:43 crc kubenswrapper[4869]: E0202 14:35:43.463179 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:43 crc kubenswrapper[4869]: I0202 14:35:43.464020 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:35:43 crc kubenswrapper[4869]: E0202 14:35:43.464194 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qmsw6_openshift-ovn-kubernetes(2865336a-500d-43e5-a075-a9a8fa01b929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" Feb 02 14:35:44 crc kubenswrapper[4869]: I0202 14:35:44.461725 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:35:44 crc kubenswrapper[4869]: I0202 14:35:44.461779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:35:44 crc kubenswrapper[4869]: I0202 14:35:44.461941 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:44 crc kubenswrapper[4869]: E0202 14:35:44.461984 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:35:44 crc kubenswrapper[4869]: E0202 14:35:44.462049 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8" Feb 02 14:35:44 crc kubenswrapper[4869]: E0202 14:35:44.462134 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:35:44 crc kubenswrapper[4869]: E0202 14:35:44.577213 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:35:45 crc kubenswrapper[4869]: I0202 14:35:45.304439 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:35:45 crc kubenswrapper[4869]: I0202 14:35:45.304532 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:35:45 crc kubenswrapper[4869]: I0202 14:35:45.462501 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:35:45 crc kubenswrapper[4869]: E0202 14:35:45.463034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:35:46 crc kubenswrapper[4869]: I0202 14:35:46.461675 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:35:46 crc kubenswrapper[4869]: I0202 14:35:46.461736 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:46 crc kubenswrapper[4869]: I0202 14:35:46.461772 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:46 crc kubenswrapper[4869]: E0202 14:35:46.461845 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:46 crc kubenswrapper[4869]: E0202 14:35:46.462054 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:46 crc kubenswrapper[4869]: E0202 14:35:46.462185 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:47 crc kubenswrapper[4869]: I0202 14:35:47.462753 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:47 crc kubenswrapper[4869]: E0202 14:35:47.463970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:48 crc kubenswrapper[4869]: I0202 14:35:48.462453 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:48 crc kubenswrapper[4869]: I0202 14:35:48.462617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:48 crc kubenswrapper[4869]: E0202 14:35:48.462674 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:48 crc kubenswrapper[4869]: E0202 14:35:48.462818 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:48 crc kubenswrapper[4869]: I0202 14:35:48.462447 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:48 crc kubenswrapper[4869]: E0202 14:35:48.462960 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:49 crc kubenswrapper[4869]: I0202 14:35:49.462348 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:49 crc kubenswrapper[4869]: E0202 14:35:49.463699 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:49 crc kubenswrapper[4869]: E0202 14:35:49.577845 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.462102 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.462236 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.462309 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.462441 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.462575 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.462735 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553465 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553532 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.553484597 +0000 UTC m=+274.198121377 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553585 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553597 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553653 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.55363403 +0000 UTC m=+274.198270810 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553691 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:50 crc kubenswrapper[4869]: I0202 14:35:50.553719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553854 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553900 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553966 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.553954529 +0000 UTC m=+274.198591309 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553983 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.553984 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554007 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554034 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554060 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554103 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.554076432 +0000 UTC m=+274.198713242 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:35:50 crc kubenswrapper[4869]: E0202 14:35:50.554168 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:37:52.554137823 +0000 UTC m=+274.198774643 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 14:35:51 crc kubenswrapper[4869]: I0202 14:35:51.461863 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:51 crc kubenswrapper[4869]: E0202 14:35:51.462092 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:52 crc kubenswrapper[4869]: I0202 14:35:52.462531 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:52 crc kubenswrapper[4869]: I0202 14:35:52.462627 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:52 crc kubenswrapper[4869]: E0202 14:35:52.462751 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:52 crc kubenswrapper[4869]: I0202 14:35:52.462819 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:52 crc kubenswrapper[4869]: E0202 14:35:52.463021 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:52 crc kubenswrapper[4869]: E0202 14:35:52.463270 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:53 crc kubenswrapper[4869]: I0202 14:35:53.462159 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:53 crc kubenswrapper[4869]: E0202 14:35:53.462340 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:54 crc kubenswrapper[4869]: I0202 14:35:54.461777 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:54 crc kubenswrapper[4869]: I0202 14:35:54.461834 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:54 crc kubenswrapper[4869]: I0202 14:35:54.461790 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:54 crc kubenswrapper[4869]: E0202 14:35:54.462048 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:54 crc kubenswrapper[4869]: E0202 14:35:54.462146 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:54 crc kubenswrapper[4869]: E0202 14:35:54.462276 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:54 crc kubenswrapper[4869]: E0202 14:35:54.579941 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 14:35:55 crc kubenswrapper[4869]: I0202 14:35:55.462508 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:55 crc kubenswrapper[4869]: E0202 14:35:55.462713 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:56 crc kubenswrapper[4869]: I0202 14:35:56.461875 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:56 crc kubenswrapper[4869]: I0202 14:35:56.462017 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:56 crc kubenswrapper[4869]: E0202 14:35:56.462120 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:56 crc kubenswrapper[4869]: I0202 14:35:56.462138 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:56 crc kubenswrapper[4869]: E0202 14:35:56.462288 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:56 crc kubenswrapper[4869]: E0202 14:35:56.462513 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:57 crc kubenswrapper[4869]: I0202 14:35:57.462258 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:57 crc kubenswrapper[4869]: E0202 14:35:57.462489 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.462332 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.462500 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:35:58 crc kubenswrapper[4869]: E0202 14:35:58.462586 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:35:58 crc kubenswrapper[4869]: E0202 14:35:58.462683 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.463051 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:58 crc kubenswrapper[4869]: E0202 14:35:58.463147 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.463511 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0"
Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.700603 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log"
Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.705247 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerStarted","Data":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"}
Feb 02 14:35:58 crc kubenswrapper[4869]: I0202 14:35:58.706783 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:35:59 crc kubenswrapper[4869]: I0202 14:35:59.462299 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:35:59 crc kubenswrapper[4869]: E0202 14:35:59.462880 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:35:59 crc kubenswrapper[4869]: I0202 14:35:59.546159 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qx2qt"]
Feb 02 14:35:59 crc kubenswrapper[4869]: I0202 14:35:59.546372 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:35:59 crc kubenswrapper[4869]: E0202 14:35:59.546522 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:35:59 crc kubenswrapper[4869]: E0202 14:35:59.580452 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 14:36:00 crc kubenswrapper[4869]: I0202 14:36:00.462354 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:36:00 crc kubenswrapper[4869]: I0202 14:36:00.462540 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:36:00 crc kubenswrapper[4869]: E0202 14:36:00.462623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:36:00 crc kubenswrapper[4869]: E0202 14:36:00.462698 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:36:01 crc kubenswrapper[4869]: I0202 14:36:01.461799 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:36:01 crc kubenswrapper[4869]: I0202 14:36:01.461933 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:36:01 crc kubenswrapper[4869]: E0202 14:36:01.462076 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:36:01 crc kubenswrapper[4869]: E0202 14:36:01.462144 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:36:02 crc kubenswrapper[4869]: I0202 14:36:02.462754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:36:02 crc kubenswrapper[4869]: I0202 14:36:02.462869 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:36:02 crc kubenswrapper[4869]: E0202 14:36:02.463130 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:36:02 crc kubenswrapper[4869]: E0202 14:36:02.463341 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:36:03 crc kubenswrapper[4869]: I0202 14:36:03.462444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:36:03 crc kubenswrapper[4869]: E0202 14:36:03.462753 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qx2qt" podUID="0b597927-2943-4e1a-bac5-1266d539e8f8"
Feb 02 14:36:03 crc kubenswrapper[4869]: I0202 14:36:03.462882 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:36:03 crc kubenswrapper[4869]: E0202 14:36:03.463249 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 14:36:04 crc kubenswrapper[4869]: I0202 14:36:04.462347 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:36:04 crc kubenswrapper[4869]: I0202 14:36:04.462356 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:36:04 crc kubenswrapper[4869]: E0202 14:36:04.462578 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 14:36:04 crc kubenswrapper[4869]: E0202 14:36:04.462680 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.462264 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.462272 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.465458 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.466389 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.467060 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.469463 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.498841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.544016 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.544578 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.550591 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.550887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.550957 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.551007 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.551071 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.551354 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.562688 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.563510 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.563946 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.564206 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.565362 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.566140 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.567971 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.569075 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.569198 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4hhbx"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.570603 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.571901 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.572506 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.575564 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.583134 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.583520 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.583757 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.584018 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x5lbr"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.584616 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.605139 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zqdwm"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.606091 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.606423 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zqdwm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.606928 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.607163 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.607648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.607962 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.608466 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.608663 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.608833 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.608997 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609029 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609257 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609403 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609420 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609452 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609570 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609735 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609773 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609408 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609885 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609890 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609998 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610010 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.609780 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610110 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610153 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610211 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610299 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610337 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610379 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610467 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610523 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610559 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610450 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610661 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610736 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610757 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610873 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610684 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.610965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.611011 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.611043 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.611076 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.611228 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612002 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612269 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612436 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612622 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.612954 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.613485 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.614180 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.615147 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxvvv"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.615976 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.616162 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.616820 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.617110 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.617357 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.619081 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.635143 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.637576 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.637993 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whptb"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.638924 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.639307 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.639495 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.639976 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.640421 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.640978 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641489 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641517 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641583 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641675 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.641883 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.642078 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.642582 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643543 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643612 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae3c559-c92e-45a1-8e66-383dee4460cd-serving-cert\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643651 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57glr\" (UniqueName: \"kubernetes.io/projected/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-kube-api-access-57glr\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643770 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w927m\" (UniqueName: \"kubernetes.io/projected/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-kube-api-access-w927m\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.643798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9922f280-ff61-424a-a336-769c0cfb5da2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.644566 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.644608 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.644638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.649132 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.653106 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.653377 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.653608 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.655298 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m44c2"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.655848 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"]
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.682386 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.682650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.683099 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.684143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.684794 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.685499 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686036 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-audit\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686143 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-policies\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686425 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686485 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-audit-dir\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-encryption-config\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686547 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-client\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-image-import-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686670 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-machine-approver-tls\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686727 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686788 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9922f280-ff61-424a-a336-769c0cfb5da2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686819 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.686872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-serving-cert\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687081 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"kube-api-access-pngwl\" (UniqueName: \"kubernetes.io/projected/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-kube-api-access-pngwl\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-config\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687144 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf59w\" (UniqueName: \"kubernetes.io/projected/9922f280-ff61-424a-a336-769c0cfb5da2-kube-api-access-rf59w\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687266 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-auth-proxy-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687292 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687482 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-service-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687515 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687557 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-serving-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0fb104b8-53b8-45dd-8406-206d6ba5a250-metrics-tls\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-encryption-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687693 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687744 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-serving-cert\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksd68\" (UniqueName: \"kubernetes.io/projected/dae3c559-c92e-45a1-8e66-383dee4460cd-kube-api-access-ksd68\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687937 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.687974 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-node-pullsecrets\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-dir\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688151 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-client\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688179 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/0fb104b8-53b8-45dd-8406-206d6ba5a250-kube-api-access-797zm\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: 
\"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688346 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4svkg\" (UniqueName: \"kubernetes.io/projected/78130644-70b6-4285-9ca7-e5a671bd1111-kube-api-access-4svkg\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688428 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688455 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688499 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688531 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.688769 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.693158 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.693553 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.694567 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.709957 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.710393 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.711137 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.715232 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.717654 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.717835 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.718007 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.719158 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.719412 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.719942 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.720545 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.722986 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.724936 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.725479 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.725780 4869 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.726303 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.726663 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.727105 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.727456 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.727658 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.727828 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.728104 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.728410 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.730357 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.730572 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.730708 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.730843 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.732489 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.732517 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.732656 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.732766 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.733110 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.733223 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.733462 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.733545 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.734189 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.734700 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.734786 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.734848 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.735107 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.735281 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.735442 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.736184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.736415 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.736855 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.738584 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.738780 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.739573 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.739749 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.740357 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.740670 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.740851 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.742409 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.742731 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.745603 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8vv5"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.745745 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.746369 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.747016 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.747410 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.749360 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.749799 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.782687 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.783362 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.784721 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.786393 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.788501 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790127 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790153 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790173 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc 
kubenswrapper[4869]: I0202 14:36:05.790236 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae3c559-c92e-45a1-8e66-383dee4460cd-serving-cert\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790303 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790362 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w927m\" (UniqueName: \"kubernetes.io/projected/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-kube-api-access-w927m\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57glr\" (UniqueName: \"kubernetes.io/projected/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-kube-api-access-57glr\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790429 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790457 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9922f280-ff61-424a-a336-769c0cfb5da2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790483 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-audit\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-policies\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790644 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-client\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " 
pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-image-import-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-audit-dir\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-encryption-config\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790806 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-machine-approver-tls\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790865 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9922f280-ff61-424a-a336-769c0cfb5da2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.790986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791002 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-serving-cert\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791046 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngwl\" (UniqueName: \"kubernetes.io/projected/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-kube-api-access-pngwl\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791070 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-config\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791086 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791105 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791127 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf59w\" (UniqueName: \"kubernetes.io/projected/9922f280-ff61-424a-a336-769c0cfb5da2-kube-api-access-rf59w\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: 
\"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791146 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791166 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-service-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.791869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-service-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.792474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.792891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-auth-proxy-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793168 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-serving-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0fb104b8-53b8-45dd-8406-206d6ba5a250-metrics-tls\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793070 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793376 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-encryption-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793420 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793659 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-auth-proxy-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793810 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.793873 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.794252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.795029 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-serving-cert\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.795081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " 
pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.795711 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-config\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.796619 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-serving-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.796704 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.797031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.797067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksd68\" (UniqueName: \"kubernetes.io/projected/dae3c559-c92e-45a1-8e66-383dee4460cd-kube-api-access-ksd68\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.797387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.797468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-node-pullsecrets\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.798793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-dir\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.798830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-client\") pod 
\"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.798856 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.799085 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-dir\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.799356 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.799395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.800431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.800997 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4svkg\" (UniqueName: \"kubernetes.io/projected/78130644-70b6-4285-9ca7-e5a671bd1111-kube-api-access-4svkg\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.801037 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/0fb104b8-53b8-45dd-8406-206d6ba5a250-kube-api-access-797zm\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.801502 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0fb104b8-53b8-45dd-8406-206d6ba5a250-metrics-tls\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.802307 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.802795 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.803093 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-client\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.803178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.803125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dae3c559-c92e-45a1-8e66-383dee4460cd-serving-cert\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805304 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-snfqj"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805649 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805768 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-audit\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.805968 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") pod 
\"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806030 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806835 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.806926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-audit-policies\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.808074 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.808463 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.808560 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-node-pullsecrets\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.809207 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.809260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9922f280-ff61-424a-a336-769c0cfb5da2-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.809660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dae3c559-c92e-45a1-8e66-383dee4460cd-config\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.809976 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9922f280-ff61-424a-a336-769c0cfb5da2-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.810018 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.810147 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.811305 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.811408 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"] Feb 02 14:36:05 crc 
kubenswrapper[4869]: I0202 14:36:05.812137 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-serving-cert\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812473 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-encryption-config\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812505 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812549 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.812767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78130644-70b6-4285-9ca7-e5a671bd1111-audit-dir\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.813204 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.813406 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-encryption-config\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.813393 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.813567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/78130644-70b6-4285-9ca7-e5a671bd1111-image-import-ca\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.814125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-etcd-client\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.814864 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78130644-70b6-4285-9ca7-e5a671bd1111-serving-cert\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.814938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.814976 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mcwnk"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.815284 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.816161 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.816538 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.816951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.817007 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.818995 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxvvv"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.819474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-machine-approver-tls\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.819650 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.821450 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.824292 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x5lbr"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.824332 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.825813 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zqdwm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.827350 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.827462 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.828672 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.829716 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.830886 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.831891 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.833345 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z4jh5"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.834515 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.834651 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.835436 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.836837 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.837376 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.839193 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.839365 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.841311 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.843214 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.845600 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.846236 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.847447 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m44c2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.848711 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.850145 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4hhbx"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.851592 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whptb"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.853000 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.854298 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.854828 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.857579 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.860015 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.860558 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z4jh5"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.862784 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mcwnk"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.865288 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8vv5"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.866612 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.867102 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.868751 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.871356 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.873046 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdq4v"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.875230 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdq4v"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.875247 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.876922 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-245rt"] Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.877435 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.886631 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.918851 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.926518 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.947027 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.967386 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 14:36:05 crc kubenswrapper[4869]: I0202 14:36:05.987612 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.007291 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.027834 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.047887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.067604 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.087103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.107886 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.127307 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.148034 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.168494 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.187058 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.208216 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.228267 4869 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.249106 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.267225 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.287490 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.306683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.308124 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.310255 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b597927-2943-4e1a-bac5-1266d539e8f8-metrics-certs\") pod \"network-metrics-daemon-qx2qt\" (UID: \"0b597927-2943-4e1a-bac5-1266d539e8f8\") " pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.327301 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.348089 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.368002 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.403711 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qx2qt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.417440 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.427571 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.449850 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.462050 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.462043 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.468435 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.488045 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.507887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.528062 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.548350 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.567826 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.588065 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.608346 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.628171 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.637628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qx2qt"] Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.648516 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.667403 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.689133 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.708529 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.726823 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.738645 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" event={"ID":"0b597927-2943-4e1a-bac5-1266d539e8f8","Type":"ContainerStarted","Data":"7dc6b95db8ef40ca28ca26cbe5cd5e850dbec7e4b3d376ce0c91dcc6c8cb82b0"} Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.746160 4869 request.go:700] Waited for 1.017402838s due to client-side throttling, not priority and fairness, 
request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.748361 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.767766 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.787065 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.808563 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.828955 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.847105 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.887725 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.908542 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.928341 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.948305 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.967838 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 14:36:06 crc kubenswrapper[4869]: I0202 14:36:06.986967 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.008050 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.027181 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.047826 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.068404 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.088319 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.114879 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.128033 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.147146 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.166732 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.188229 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.208188 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.253584 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") pod \"controller-manager-879f6c89f-2zsv9\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.267610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") pod \"oauth-openshift-558db77b4-snmjm\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") " pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.288196 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.289819 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksd68\" (UniqueName: \"kubernetes.io/projected/dae3c559-c92e-45a1-8e66-383dee4460cd-kube-api-access-ksd68\") pod \"authentication-operator-69f744f599-pm4x8\" (UID: \"dae3c559-c92e-45a1-8e66-383dee4460cd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.321395 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") pod \"console-f9d7485db-ptmkd\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.337007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57glr\" (UniqueName: \"kubernetes.io/projected/aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804-kube-api-access-57glr\") pod \"apiserver-7bbb656c7d-t8c67\" (UID: \"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.363194 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.364219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w927m\" (UniqueName: \"kubernetes.io/projected/0bef80e9-27d1-43c4-9a1f-4a86b2effe23-kube-api-access-w927m\") pod \"machine-approver-56656f9798-gv86n\" (UID: \"0bef80e9-27d1-43c4-9a1f-4a86b2effe23\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.371881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797zm\" (UniqueName: \"kubernetes.io/projected/0fb104b8-53b8-45dd-8406-206d6ba5a250-kube-api-access-797zm\") pod \"dns-operator-744455d44c-x5lbr\" (UID: \"0fb104b8-53b8-45dd-8406-206d6ba5a250\") " pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.380980 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.381705 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.387651 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4svkg\" (UniqueName: \"kubernetes.io/projected/78130644-70b6-4285-9ca7-e5a671bd1111-kube-api-access-4svkg\") pod \"apiserver-76f77b778f-4hhbx\" (UID: \"78130644-70b6-4285-9ca7-e5a671bd1111\") " pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.406449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngwl\" (UniqueName: \"kubernetes.io/projected/1b6ec461-dbfb-4c98-9e2b-0946363a2f1f-kube-api-access-pngwl\") pod \"cluster-samples-operator-665b6dd947-ttkq6\" (UID: \"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.407850 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: W0202 14:36:07.411075 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bef80e9_27d1_43c4_9a1f_4a86b2effe23.slice/crio-76aa7562aa54b7cf851bdfa539174e1a5d61390b4d0163ac290903646d675bd6 WatchSource:0}: Error finding container 76aa7562aa54b7cf851bdfa539174e1a5d61390b4d0163ac290903646d675bd6: Status 404 returned error can't find the container with id 76aa7562aa54b7cf851bdfa539174e1a5d61390b4d0163ac290903646d675bd6 Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.414167 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.428351 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.433873 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.448500 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.468237 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.489881 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.494456 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.507927 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.528325 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.551923 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.552621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.568184 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.569229 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.571642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf59w\" (UniqueName: \"kubernetes.io/projected/9922f280-ff61-424a-a336-769c0cfb5da2-kube-api-access-rf59w\") pod \"openshift-apiserver-operator-796bbdcf4f-gkjqg\" (UID: \"9922f280-ff61-424a-a336-769c0cfb5da2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.588058 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.593318 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"] Feb 02 14:36:07 crc kubenswrapper[4869]: W0202 14:36:07.610002 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod992c2b96_5783_4865_a47d_167caf91e241.slice/crio-92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365 WatchSource:0}: Error finding container 92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365: Status 404 returned error can't find the container with id 92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365 Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.614830 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.628425 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.648015 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.650303 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.668795 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.683579 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x5lbr"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.688137 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.708634 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.728187 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.748217 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.765125 4869 request.go:700] Waited for 1.88742985s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.768399 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.789603 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" event={"ID":"0b597927-2943-4e1a-bac5-1266d539e8f8","Type":"ContainerStarted","Data":"d0d20fb4b187a12a2a79cba7bb06c0a5f41f9056f50e4b03ce3097299f9c33b1"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.789696 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qx2qt" event={"ID":"0b597927-2943-4e1a-bac5-1266d539e8f8","Type":"ContainerStarted","Data":"9909c443f73f0529408e05055bf9cbd5ac2d26461ece1c2a09e1cb5216a0b581"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.790207 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.793692 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" event={"ID":"0fb104b8-53b8-45dd-8406-206d6ba5a250","Type":"ContainerStarted","Data":"9d536b5002fb4c5739cdec4594a0130f7ca05a5e01a90ec55afb667f0d115aee"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.795718 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" 
event={"ID":"992c2b96-5783-4865-a47d-167caf91e241","Type":"ContainerStarted","Data":"92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.803991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" event={"ID":"0bef80e9-27d1-43c4-9a1f-4a86b2effe23","Type":"ContainerStarted","Data":"76aa7562aa54b7cf851bdfa539174e1a5d61390b4d0163ac290903646d675bd6"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.806713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ptmkd" event={"ID":"ccaee1bd-fef5-4874-9e96-002a733fd5dc","Type":"ContainerStarted","Data":"16f76cd6bf05f6fb4f402ecc35e901805472a099619bf8e10a27be6e93584f89"} Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.812710 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.828604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4hhbx"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.833390 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.833449 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.833795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: E0202 14:36:07.834157 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.334141267 +0000 UTC m=+169.978778037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.834943 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835096 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.835887 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsgdg\" (UniqueName: \"kubernetes.io/projected/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-kube-api-access-lsgdg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.838784 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-42krp\" (UID: 
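[editor's note: the E0202 nestedpendingoperations.go:348 entry above fails because the kubevirt.io.hostpath-provisioner node plugin has not yet registered with this kubelet, so the image-registry pod's PVC mount is retried (durationBeforeRetry 500ms) until registration completes. Registration state is visible in the node's CSINode object. A minimal sketch that lists a node's registered drivers via client-go; the node name "crc" comes from the log prefix, the kubeconfig path is illustrative, and cluster-admin credentials are assumed:

    package main

    import (
            "context"
            "fmt"

            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
            "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
            cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
            if err != nil {
                    panic(err)
            }
            cs, err := kubernetes.NewForConfig(cfg)
            if err != nil {
                    panic(err)
            }
            // The CSINode object mirrors kubelet's plugin registrations; the
            // MountDevice error clears once the driver name appears here.
            n, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
            if err != nil {
                    panic(err)
            }
            for _, d := range n.Spec.Drivers {
                    fmt.Println("registered CSI driver:", d.Name)
            }
    }
]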
\"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.839132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.847492 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.868112 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.930079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pm4x8"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.935734 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.940760 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941105 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a72caff3-6c15-4b44-9821-ed7b30a13b58-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnzwd\" (UniqueName: \"kubernetes.io/projected/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-kube-api-access-mnzwd\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bgr\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-kube-api-access-z5bgr\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:07 crc 
kubenswrapper[4869]: I0202 14:36:07.941248 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b506ef-4fcb-4bdc-bf47-f875c04441c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941267 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941304 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9f98e83-4853-4d43-bf81-09795442acc8-metrics-tls\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941395 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-mountpoint-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-cabundle\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkhc\" (UniqueName: \"kubernetes.io/projected/e1a1dc5f-b886-4775-a090-0fe774fb23ed-kube-api-access-gjkhc\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941480 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-certs\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/debcc43e-e06f-486a-af8c-6a9d4d553913-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-profile-collector-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d00dceb-f9c4-4c49-a631-ea69008c387a-metrics-tls\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941579 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31732c2e-e945-4fb4-b471-175489c076c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941616 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 
14:36:07.941638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b506ef-4fcb-4bdc-bf47-f875c04441c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941656 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jclxx\" (UniqueName: \"kubernetes.io/projected/cc58cc97-069b-4691-88ed-cc2788096a6e-kube-api-access-jclxx\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941682 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsgdg\" (UniqueName: \"kubernetes.io/projected/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-kube-api-access-lsgdg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941720 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2cef1c-ff45-4005-8550-4d87d4601dbd-serving-cert\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941739 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-config\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a76e81a-7f92-4baf-9604-1e1c011da3a0-tmpfs\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941774 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31732c2e-e945-4fb4-b471-175489c076c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:07 crc 
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tdqc\" (UniqueName: \"kubernetes.io/projected/0e414f83-c91b-4997-8cb3-3e200f62e45a-kube-api-access-9tdqc\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-config\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-srv-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941953 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72v6\" (UniqueName: \"kubernetes.io/projected/3d2cef1c-ff45-4005-8550-4d87d4601dbd-kube-api-access-q72v6\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f1c29-72b6-4768-8245-c5db262d052a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.941991 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-config\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942010 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4fgx\" (UniqueName: \"kubernetes.io/projected/6ea4b230-5ebc-4712-88e0-ce48acfc4785-kube-api-access-w4fgx\") pod \"migrator-59844c95c7-7kwts\" (UID: \"6ea4b230-5ebc-4712-88e0-ce48acfc4785\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942029 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31732c2e-e945-4fb4-b471-175489c076c4-config\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942048 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942068 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f89cdf2d-50e4-4089-8345-f11f7791826d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrcc\" (UniqueName: \"kubernetes.io/projected/f75d2e36-7785-4a76-8dfb-55227d418d19-kube-api-access-mwrcc\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9q8t\" (UniqueName: \"kubernetes.io/projected/7c9fade4-43f8-4b81-90de-876b5fac7b4c-kube-api-access-k9q8t\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdrt\" (UniqueName: \"kubernetes.io/projected/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-kube-api-access-hzdrt\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942156 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d00dceb-f9c4-4c49-a631-ea69008c387a-trusted-ca\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942174 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e73f227e-ad7c-4212-abd9-e844916c0a17-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942204 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78dr\" (UniqueName: \"kubernetes.io/projected/a549ee44-8319-4980-ac57-9f0c8f169784-kube-api-access-h78dr\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942232 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942253 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a1dc5f-b886-4775-a090-0fe774fb23ed-config\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942269 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-plugins-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4wcc\" (UniqueName: \"kubernetes.io/projected/bedd3f8b-6013-48a0-a84e-5c9760146d70-kube-api-access-h4wcc\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942301 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debcc43e-e06f-486a-af8c-6a9d4d553913-serving-cert\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-key\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942389 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-csi-data-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxt4w\" (UniqueName: \"kubernetes.io/projected/f89cdf2d-50e4-4089-8345-f11f7791826d-kube-api-access-lxt4w\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942426 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-trusted-ca\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942443 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942462 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-default-certificate\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: E0202 14:36:07.986132 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.486086429 +0000 UTC m=+170.130723199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.988536 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.942481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a72caff3-6c15-4b44-9821-ed7b30a13b58-proxy-tls\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.992759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.992842 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ccjx\" (UniqueName: \"kubernetes.io/projected/f9f98e83-4853-4d43-bf81-09795442acc8-kube-api-access-2ccjx\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.993083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktjpr\" (UniqueName: \"kubernetes.io/projected/2f135077-03c5-46c5-a9c0-603837453e1c-kube-api-access-ktjpr\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.993225 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-metrics-certs\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:07 crc kubenswrapper[4869]: E0202 14:36:07.996670 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.496643369 +0000 UTC m=+170.141280139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.996935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt9sd\" (UniqueName: \"kubernetes.io/projected/f62540d0-1acd-4266-9738-f0fdc72f47d0-kube-api-access-rt9sd\") pod \"downloads-7954f5f757-zqdwm\" (UID: \"f62540d0-1acd-4266-9738-f0fdc72f47d0\") " pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.996991 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jfkh\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-kube-api-access-6jfkh\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997015 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997036 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e414f83-c91b-4997-8cb3-3e200f62e45a-cert\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997077 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b506ef-4fcb-4bdc-bf47-f875c04441c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzxg\" (UniqueName: 
\"kubernetes.io/projected/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-kube-api-access-ttzxg\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997184 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-node-bootstrap-token\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqj8z\" (UniqueName: \"kubernetes.io/projected/8a76e81a-7f92-4baf-9604-1e1c011da3a0-kube-api-access-rqj8z\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzqw\" (UniqueName: \"kubernetes.io/projected/ca2f1c29-72b6-4768-8245-c5db262d052a-kube-api-access-4rzqw\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997939 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e73f227e-ad7c-4212-abd9-e844916c0a17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.997979 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.998001 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f135077-03c5-46c5-a9c0-603837453e1c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.998327 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wspcl\" (UniqueName: \"kubernetes.io/projected/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-kube-api-access-wspcl\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999069 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-stats-auth\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr246\" (UniqueName: \"kubernetes.io/projected/debcc43e-e06f-486a-af8c-6a9d4d553913-kube-api-access-mr246\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999147 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999166 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999190 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-images\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999218 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hmc\" (UniqueName: 
\"kubernetes.io/projected/a72caff3-6c15-4b44-9821-ed7b30a13b58-kube-api-access-82hmc\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999246 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-images\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999263 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25w8v\" (UniqueName: \"kubernetes.io/projected/0ade6e3e-6274-4469-af6f-39455fd721db-kube-api-access-25w8v\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a1dc5f-b886-4775-a090-0fe774fb23ed-serving-cert\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999330 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999352 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-socket-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f75d2e36-7785-4a76-8dfb-55227d418d19-proxy-tls\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999412 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2f135077-03c5-46c5-a9c0-603837453e1c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-srv-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999491 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a549ee44-8319-4980-ac57-9f0c8f169784-service-ca-bundle\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999569 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ade6e3e-6274-4469-af6f-39455fd721db-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-config\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999714 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-registration-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999757 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9f98e83-4853-4d43-bf81-09795442acc8-config-volume\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-service-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:07 crc kubenswrapper[4869]: I0202 14:36:07.999805 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-client\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:07.999843 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-serving-cert\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.001533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.001962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.004139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.006493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.009361 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.020690 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsgdg\" (UniqueName: \"kubernetes.io/projected/6aacb2d9-48ca-4f95-9153-8f4338b4a16c-kube-api-access-lsgdg\") pod \"openshift-controller-manager-operator-756b6f6bc6-9znt6\" (UID: \"6aacb2d9-48ca-4f95-9153-8f4338b4a16c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.026628 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.027251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.028117 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg"] Feb 02 14:36:08 crc kubenswrapper[4869]: W0202 14:36:08.041011 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9922f280_ff61_424a_a336_769c0cfb5da2.slice/crio-30f998d369401c48a9cb14c97ff2199f0c0ff3877f27412682cd41fab6cb73d0 WatchSource:0}: Error finding container 30f998d369401c48a9cb14c97ff2199f0c0ff3877f27412682cd41fab6cb73d0: Status 404 returned error can't find the container with id 30f998d369401c48a9cb14c97ff2199f0c0ff3877f27412682cd41fab6cb73d0 Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.049647 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"] Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.053542 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6"] Feb 02 14:36:08 crc kubenswrapper[4869]: W0202 14:36:08.098084 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaf3c5a5_da3e_43dc_b8dc_a02b3fd32804.slice/crio-05c541b5fb87668031fdd72e896a3bc99c1d87cc9d223ad7767b25528bc3b5db WatchSource:0}: Error finding container 05c541b5fb87668031fdd72e896a3bc99c1d87cc9d223ad7767b25528bc3b5db: Status 404 returned error can't find the container with id 05c541b5fb87668031fdd72e896a3bc99c1d87cc9d223ad7767b25528bc3b5db Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.098316 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.100569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.100833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31732c2e-e945-4fb4-b471-175489c076c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.100876 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tdqc\" (UniqueName: \"kubernetes.io/projected/0e414f83-c91b-4997-8cb3-3e200f62e45a-kube-api-access-9tdqc\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.100933 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.600887343 +0000 UTC m=+170.245524113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.100985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72v6\" (UniqueName: \"kubernetes.io/projected/3d2cef1c-ff45-4005-8550-4d87d4601dbd-kube-api-access-q72v6\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101122 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-config\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101151 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-srv-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101194 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-config\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101214 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4fgx\" (UniqueName: 
\"kubernetes.io/projected/6ea4b230-5ebc-4712-88e0-ce48acfc4785-kube-api-access-w4fgx\") pod \"migrator-59844c95c7-7kwts\" (UID: \"6ea4b230-5ebc-4712-88e0-ce48acfc4785\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31732c2e-e945-4fb4-b471-175489c076c4-config\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f1c29-72b6-4768-8245-c5db262d052a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f89cdf2d-50e4-4089-8345-f11f7791826d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrcc\" (UniqueName: \"kubernetes.io/projected/f75d2e36-7785-4a76-8dfb-55227d418d19-kube-api-access-mwrcc\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101378 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9q8t\" (UniqueName: \"kubernetes.io/projected/7c9fade4-43f8-4b81-90de-876b5fac7b4c-kube-api-access-k9q8t\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101397 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdrt\" (UniqueName: \"kubernetes.io/projected/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-kube-api-access-hzdrt\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e73f227e-ad7c-4212-abd9-e844916c0a17-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/1d00dceb-f9c4-4c49-a631-ea69008c387a-trusted-ca\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101505 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78dr\" (UniqueName: \"kubernetes.io/projected/a549ee44-8319-4980-ac57-9f0c8f169784-kube-api-access-h78dr\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101531 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-plugins-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4wcc\" (UniqueName: \"kubernetes.io/projected/bedd3f8b-6013-48a0-a84e-5c9760146d70-kube-api-access-h4wcc\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101568 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a1dc5f-b886-4775-a090-0fe774fb23ed-config\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101588 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-key\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101607 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debcc43e-e06f-486a-af8c-6a9d4d553913-serving-cert\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101630 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-csi-data-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101648 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxt4w\" (UniqueName: \"kubernetes.io/projected/f89cdf2d-50e4-4089-8345-f11f7791826d-kube-api-access-lxt4w\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" Feb 02 14:36:08 crc 
kubenswrapper[4869]: I0202 14:36:08.101671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101696 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-trusted-ca\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101718 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a72caff3-6c15-4b44-9821-ed7b30a13b58-proxy-tls\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-default-certificate\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ccjx\" (UniqueName: \"kubernetes.io/projected/f9f98e83-4853-4d43-bf81-09795442acc8-kube-api-access-2ccjx\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101854 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt9sd\" (UniqueName: \"kubernetes.io/projected/f62540d0-1acd-4266-9738-f0fdc72f47d0-kube-api-access-rt9sd\") pod \"downloads-7954f5f757-zqdwm\" (UID: \"f62540d0-1acd-4266-9738-f0fdc72f47d0\") " pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101877 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktjpr\" (UniqueName: 
\"kubernetes.io/projected/2f135077-03c5-46c5-a9c0-603837453e1c-kube-api-access-ktjpr\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101896 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-metrics-certs\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101967 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jfkh\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-kube-api-access-6jfkh\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.101988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e414f83-c91b-4997-8cb3-3e200f62e45a-cert\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzxg\" (UniqueName: \"kubernetes.io/projected/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-kube-api-access-ttzxg\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b506ef-4fcb-4bdc-bf47-f875c04441c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-node-bootstrap-token\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqj8z\" (UniqueName: \"kubernetes.io/projected/8a76e81a-7f92-4baf-9604-1e1c011da3a0-kube-api-access-rqj8z\") pod 
\"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102119 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e73f227e-ad7c-4212-abd9-e844916c0a17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102144 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzqw\" (UniqueName: \"kubernetes.io/projected/ca2f1c29-72b6-4768-8245-c5db262d052a-kube-api-access-4rzqw\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102221 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f135077-03c5-46c5-a9c0-603837453e1c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wspcl\" (UniqueName: \"kubernetes.io/projected/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-kube-api-access-wspcl\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-stats-auth\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr246\" (UniqueName: \"kubernetes.io/projected/debcc43e-e06f-486a-af8c-6a9d4d553913-kube-api-access-mr246\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-images\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: 
\"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102357 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hmc\" (UniqueName: \"kubernetes.io/projected/a72caff3-6c15-4b44-9821-ed7b30a13b58-kube-api-access-82hmc\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-images\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25w8v\" (UniqueName: \"kubernetes.io/projected/0ade6e3e-6274-4469-af6f-39455fd721db-kube-api-access-25w8v\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102463 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102481 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a1dc5f-b886-4775-a090-0fe774fb23ed-serving-cert\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31732c2e-e945-4fb4-b471-175489c076c4-config\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102516 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f75d2e36-7785-4a76-8dfb-55227d418d19-proxy-tls\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102536 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-socket-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f135077-03c5-46c5-a9c0-603837453e1c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-srv-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102631 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a549ee44-8319-4980-ac57-9f0c8f169784-service-ca-bundle\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ade6e3e-6274-4469-af6f-39455fd721db-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102701 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-config\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102728 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102745 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-registration-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9f98e83-4853-4d43-bf81-09795442acc8-config-volume\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102794 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-service-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-client\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-serving-cert\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a72caff3-6c15-4b44-9821-ed7b30a13b58-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102872 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnzwd\" (UniqueName: \"kubernetes.io/projected/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-kube-api-access-mnzwd\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5bgr\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-kube-api-access-z5bgr\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102950 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b506ef-4fcb-4bdc-bf47-f875c04441c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102967 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9f98e83-4853-4d43-bf81-09795442acc8-metrics-tls\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103053 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-mountpoint-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103070 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-cabundle\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gjkhc\" (UniqueName: \"kubernetes.io/projected/e1a1dc5f-b886-4775-a090-0fe774fb23ed-kube-api-access-gjkhc\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103116 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-certs\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103153 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/debcc43e-e06f-486a-af8c-6a9d4d553913-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d00dceb-f9c4-4c49-a631-ea69008c387a-metrics-tls\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31732c2e-e945-4fb4-b471-175489c076c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103203 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-profile-collector-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103226 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103250 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") pod 
\"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103280 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b506ef-4fcb-4bdc-bf47-f875c04441c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jclxx\" (UniqueName: \"kubernetes.io/projected/cc58cc97-069b-4691-88ed-cc2788096a6e-kube-api-access-jclxx\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103321 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103341 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2cef1c-ff45-4005-8550-4d87d4601dbd-serving-cert\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-config\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.103375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a76e81a-7f92-4baf-9604-1e1c011da3a0-tmpfs\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.104157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-csi-data-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.105283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-socket-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.105647 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31732c2e-e945-4fb4-b471-175489c076c4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.105893 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a76e81a-7f92-4baf-9604-1e1c011da3a0-tmpfs\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.106435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-srv-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.102178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-config\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.107752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.108263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-images\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113651 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a1dc5f-b886-4775-a090-0fe774fb23ed-serving-cert\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.110871 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113682 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a1dc5f-b886-4775-a090-0fe774fb23ed-config\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113702 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f75d2e36-7785-4a76-8dfb-55227d418d19-proxy-tls\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.111588 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-plugins-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.112081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d2cef1c-ff45-4005-8550-4d87d4601dbd-trusted-ca\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.112291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-default-certificate\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.112673 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.612652373 +0000 UTC m=+170.257289143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.112758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-config\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f135077-03c5-46c5-a9c0-603837453e1c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.111405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f1c29-72b6-4768-8245-c5db262d052a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.111467 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0ade6e3e-6274-4469-af6f-39455fd721db-images\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f89cdf2d-50e4-4089-8345-f11f7791826d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113248 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e73f227e-ad7c-4212-abd9-e844916c0a17-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.113901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.114568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.114749 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0e414f83-c91b-4997-8cb3-3e200f62e45a-cert\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.115274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a72caff3-6c15-4b44-9821-ed7b30a13b58-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.115369 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a72caff3-6c15-4b44-9821-ed7b30a13b58-proxy-tls\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.115402 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.115734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9f98e83-4853-4d43-bf81-09795442acc8-config-volume\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.116043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-config\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.116425 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.109287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-metrics-certs\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.116764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-service-ca\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.117051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f75d2e36-7785-4a76-8dfb-55227d418d19-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.117365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a549ee44-8319-4980-ac57-9f0c8f169784-service-ca-bundle\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.117606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-registration-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.117653 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bedd3f8b-6013-48a0-a84e-5c9760146d70-mountpoint-dir\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66b506ef-4fcb-4bdc-bf47-f875c04441c0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-srv-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d00dceb-f9c4-4c49-a631-ea69008c387a-trusted-ca\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118822 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/debcc43e-e06f-486a-af8c-6a9d4d553913-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.118831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-config\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.119318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e73f227e-ad7c-4212-abd9-e844916c0a17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.119993 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.120257 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-cabundle\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.120952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.121585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.122646 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ade6e3e-6274-4469-af6f-39455fd721db-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.123201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f9f98e83-4853-4d43-bf81-09795442acc8-metrics-tls\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.123697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a76e81a-7f92-4baf-9604-1e1c011da3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.123740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.124219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f135077-03c5-46c5-a9c0-603837453e1c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.123750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-node-bootstrap-token\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.124860 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a549ee44-8319-4980-ac57-9f0c8f169784-stats-auth\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.125477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7c9fade4-43f8-4b81-90de-876b5fac7b4c-certs\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.125548 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-etcd-client\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.126755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc58cc97-069b-4691-88ed-cc2788096a6e-signing-key\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.128709 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.128736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/66b506ef-4fcb-4bdc-bf47-f875c04441c0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.129951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/debcc43e-e06f-486a-af8c-6a9d4d553913-serving-cert\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.130555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-serving-cert\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.130555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1d00dceb-f9c4-4c49-a631-ea69008c387a-metrics-tls\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.130617 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d2cef1c-ff45-4005-8550-4d87d4601dbd-serving-cert\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.130728 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.131161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-profile-collector-cert\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.144726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tdqc\" (UniqueName: \"kubernetes.io/projected/0e414f83-c91b-4997-8cb3-3e200f62e45a-kube-api-access-9tdqc\") pod \"ingress-canary-z4jh5\" (UID: \"0e414f83-c91b-4997-8cb3-3e200f62e45a\") " pod="openshift-ingress-canary/ingress-canary-z4jh5"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.171059 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q72v6\" (UniqueName: \"kubernetes.io/projected/3d2cef1c-ff45-4005-8550-4d87d4601dbd-kube-api-access-q72v6\") pod \"console-operator-58897d9998-dxvvv\" (UID: \"3d2cef1c-ff45-4005-8550-4d87d4601dbd\") " pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.188255 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9q8t\" (UniqueName: \"kubernetes.io/projected/7c9fade4-43f8-4b81-90de-876b5fac7b4c-kube-api-access-k9q8t\") pod \"machine-config-server-245rt\" (UID: \"7c9fade4-43f8-4b81-90de-876b5fac7b4c\") " pod="openshift-machine-config-operator/machine-config-server-245rt"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.204077 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.204266 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.704230595 +0000 UTC m=+170.348867365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.204487 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4fgx\" (UniqueName: \"kubernetes.io/projected/6ea4b230-5ebc-4712-88e0-ce48acfc4785-kube-api-access-w4fgx\") pod \"migrator-59844c95c7-7kwts\" (UID: \"6ea4b230-5ebc-4712-88e0-ce48acfc4785\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.204803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.206532 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.70650466 +0000 UTC m=+170.351141420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.222345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") pod \"marketplace-operator-79b997595-xl8hj\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.223965 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z4jh5"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.246187 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzqw\" (UniqueName: \"kubernetes.io/projected/ca2f1c29-72b6-4768-8245-c5db262d052a-kube-api-access-4rzqw\") pod \"package-server-manager-789f6589d5-znb54\" (UID: \"ca2f1c29-72b6-4768-8245-c5db262d052a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.255226 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-245rt"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.265729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jfkh\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-kube-api-access-6jfkh\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.285346 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrcc\" (UniqueName: \"kubernetes.io/projected/f75d2e36-7785-4a76-8dfb-55227d418d19-kube-api-access-mwrcc\") pod \"machine-config-operator-74547568cd-jhvz8\" (UID: \"f75d2e36-7785-4a76-8dfb-55227d418d19\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.305267 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdrt\" (UniqueName: \"kubernetes.io/projected/5daf4eab-ca30-4ea4-9eb0-6cc5f06877df-kube-api-access-hzdrt\") pod \"catalog-operator-68c6474976-cvd9s\" (UID: \"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.306503 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.307135 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.806893439 +0000 UTC m=+170.451530209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.308508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.308700 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dxvvv"
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.310419 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.810400816 +0000 UTC m=+170.455037586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.319572 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.325421 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b506ef-4fcb-4bdc-bf47-f875c04441c0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vx9ts\" (UID: \"66b506ef-4fcb-4bdc-bf47-f875c04441c0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.325684 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"
Feb 02 14:36:08 crc kubenswrapper[4869]: W0202 14:36:08.346268 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c9fade4_43f8_4b81_90de_876b5fac7b4c.slice/crio-9cec97abbb7bf422588b8e0d50f5b664457daedbf2502f2ef1dca09ffae879e2 WatchSource:0}: Error finding container 9cec97abbb7bf422588b8e0d50f5b664457daedbf2502f2ef1dca09ffae879e2: Status 404 returned error can't find the container with id 9cec97abbb7bf422588b8e0d50f5b664457daedbf2502f2ef1dca09ffae879e2
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.370814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") pod \"route-controller-manager-6576b87f9c-wkkx2\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.370884 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzxg\" (UniqueName: \"kubernetes.io/projected/18ef05f5-ba54-4dfe-adeb-32ed86dfce28-kube-api-access-ttzxg\") pod \"olm-operator-6b444d44fb-mm87w\" (UID: \"18ef05f5-ba54-4dfe-adeb-32ed86dfce28\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.397618 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hmc\" (UniqueName: \"kubernetes.io/projected/a72caff3-6c15-4b44-9821-ed7b30a13b58-kube-api-access-82hmc\") pod \"machine-config-controller-84d6567774-xkblm\" (UID: \"a72caff3-6c15-4b44-9821-ed7b30a13b58\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.399812 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6"]
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.409343 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.409586 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.909529233 +0000 UTC m=+170.554166003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.410200 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.411027 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:08.911007889 +0000 UTC m=+170.555644669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.412448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxt4w\" (UniqueName: \"kubernetes.io/projected/f89cdf2d-50e4-4089-8345-f11f7791826d-kube-api-access-lxt4w\") pod \"control-plane-machine-set-operator-78cbb6b69f-l692p\" (UID: \"f89cdf2d-50e4-4089-8345-f11f7791826d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.417428 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.424433 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqj8z\" (UniqueName: \"kubernetes.io/projected/8a76e81a-7f92-4baf-9604-1e1c011da3a0-kube-api-access-rqj8z\") pod \"packageserver-d55dfcdfc-wnc44\" (UID: \"8a76e81a-7f92-4baf-9604-1e1c011da3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.426266 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"
Feb 02 14:36:08 crc kubenswrapper[4869]: W0202 14:36:08.435589 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aacb2d9_48ca_4f95_9153_8f4338b4a16c.slice/crio-5dc9b6eb08b3b2a5162275e9458d4b896361027db1de0bd1e0e6e9052f46a00d WatchSource:0}: Error finding container 5dc9b6eb08b3b2a5162275e9458d4b896361027db1de0bd1e0e6e9052f46a00d: Status 404 returned error can't find the container with id 5dc9b6eb08b3b2a5162275e9458d4b896361027db1de0bd1e0e6e9052f46a00d
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.446754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78dr\" (UniqueName: \"kubernetes.io/projected/a549ee44-8319-4980-ac57-9f0c8f169784-kube-api-access-h78dr\") pod \"router-default-5444994796-snfqj\" (UID: \"a549ee44-8319-4980-ac57-9f0c8f169784\") " pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.450147 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.461021 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.471629 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.473391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25w8v\" (UniqueName: \"kubernetes.io/projected/0ade6e3e-6274-4469-af6f-39455fd721db-kube-api-access-25w8v\") pod \"machine-api-operator-5694c8668f-whptb\" (UID: \"0ade6e3e-6274-4469-af6f-39455fd721db\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.483273 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.484549 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4wcc\" (UniqueName: \"kubernetes.io/projected/bedd3f8b-6013-48a0-a84e-5c9760146d70-kube-api-access-h4wcc\") pod \"csi-hostpathplugin-kdq4v\" (UID: \"bedd3f8b-6013-48a0-a84e-5c9760146d70\") " pod="hostpath-provisioner/csi-hostpathplugin-kdq4v"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.494188 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z4jh5"]
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.503602 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-snfqj"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.507624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ccjx\" (UniqueName: \"kubernetes.io/projected/f9f98e83-4853-4d43-bf81-09795442acc8-kube-api-access-2ccjx\") pod \"dns-default-mcwnk\" (UID: \"f9f98e83-4853-4d43-bf81-09795442acc8\") " pod="openshift-dns/dns-default-mcwnk"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.512435 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.512691 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.012638089 +0000 UTC m=+170.657274859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.512942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.513434 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.013406868 +0000 UTC m=+170.658043638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.516489 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mcwnk"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.526142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt9sd\" (UniqueName: \"kubernetes.io/projected/f62540d0-1acd-4266-9738-f0fdc72f47d0-kube-api-access-rt9sd\") pod \"downloads-7954f5f757-zqdwm\" (UID: \"f62540d0-1acd-4266-9738-f0fdc72f47d0\") " pod="openshift-console/downloads-7954f5f757-zqdwm"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.549415 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.558536 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktjpr\" (UniqueName: \"kubernetes.io/projected/2f135077-03c5-46c5-a9c0-603837453e1c-kube-api-access-ktjpr\") pod \"kube-storage-version-migrator-operator-b67b599dd-7h9lk\" (UID: \"2f135077-03c5-46c5-a9c0-603837453e1c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.575269 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zqdwm"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.590464 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.596264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/31732c2e-e945-4fb4-b471-175489c076c4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6fd6q\" (UID: \"31732c2e-e945-4fb4-b471-175489c076c4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.614722 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.614874 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.114815732 +0000 UTC m=+170.759452492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.615258 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.615871 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.115853018 +0000 UTC m=+170.760489788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.618731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.644843 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.645981 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.653620 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.659880 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.664251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnzwd\" (UniqueName: \"kubernetes.io/projected/90d2d2e9-b85f-46b8-b768-a59ebd9fd423-kube-api-access-mnzwd\") pod \"etcd-operator-b45778765-m44c2\" (UID: \"90d2d2e9-b85f-46b8-b768-a59ebd9fd423\") " pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.669925 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.670003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wspcl\" (UniqueName: \"kubernetes.io/projected/b1cf41b3-7232-4a16-ad7f-0a686f1653dd-kube-api-access-wspcl\") pod \"multus-admission-controller-857f4d67dd-p9cvf\" (UID: \"b1cf41b3-7232-4a16-ad7f-0a686f1653dd\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.703002 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5bgr\" (UniqueName: \"kubernetes.io/projected/1d00dceb-f9c4-4c49-a631-ea69008c387a-kube-api-access-z5bgr\") pod \"ingress-operator-5b745b69d9-9rsqs\" (UID: \"1d00dceb-f9c4-4c49-a631-ea69008c387a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.705658 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c40fc5ef-7c09-46e1-808d-f388cba3a5e3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-r954c\" (UID: \"c40fc5ef-7c09-46e1-808d-f388cba3a5e3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.710334 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.716491 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.716657 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.216629846 +0000 UTC m=+170.861266626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.716900 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.717364 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.217356373 +0000 UTC m=+170.861993133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.723748 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jclxx\" (UniqueName: \"kubernetes.io/projected/cc58cc97-069b-4691-88ed-cc2788096a6e-kube-api-access-jclxx\") pod \"service-ca-9c57cc56f-t8vv5\" (UID: \"cc58cc97-069b-4691-88ed-cc2788096a6e\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.730119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") pod \"collect-profiles-29500710-2vmgv\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.730198 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.740385 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.746955 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e73f227e-ad7c-4212-abd9-e844916c0a17-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hcxlq\" (UID: \"e73f227e-ad7c-4212-abd9-e844916c0a17\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.777752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjkhc\" (UniqueName: \"kubernetes.io/projected/e1a1dc5f-b886-4775-a090-0fe774fb23ed-kube-api-access-gjkhc\") pod \"service-ca-operator-777779d784-lkcc2\" (UID: \"e1a1dc5f-b886-4775-a090-0fe774fb23ed\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.783358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr246\" (UniqueName: \"kubernetes.io/projected/debcc43e-e06f-486a-af8c-6a9d4d553913-kube-api-access-mr246\") pod \"openshift-config-operator-7777fb866f-hjpd4\" (UID: \"debcc43e-e06f-486a-af8c-6a9d4d553913\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.792938 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.818002 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.819178 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.319147587 +0000 UTC m=+170.963784357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.840424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z4jh5" event={"ID":"0e414f83-c91b-4997-8cb3-3e200f62e45a","Type":"ContainerStarted","Data":"f68b18e01951bee20d5ad62beb1695c5dc733a1de35699be75bcfedbca173c7e"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.848251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" event={"ID":"6aacb2d9-48ca-4f95-9153-8f4338b4a16c","Type":"ContainerStarted","Data":"5dc9b6eb08b3b2a5162275e9458d4b896361027db1de0bd1e0e6e9052f46a00d"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.854345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ptmkd" event={"ID":"ccaee1bd-fef5-4874-9e96-002a733fd5dc","Type":"ContainerStarted","Data":"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.877589 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" event={"ID":"0bef80e9-27d1-43c4-9a1f-4a86b2effe23","Type":"ContainerStarted","Data":"bf26bdf8aee31f6fbbb4edaf16894afa8066e5e4ca4a25971d51c5e065ee63ff"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.877649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" event={"ID":"0bef80e9-27d1-43c4-9a1f-4a86b2effe23","Type":"ContainerStarted","Data":"24b9b1880ef9ed33fa8d9bb45282da2fb75bb55ecd62003d404271e536976623"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.884684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" event={"ID":"9922f280-ff61-424a-a336-769c0cfb5da2","Type":"ContainerStarted","Data":"2b51bcbb85d8472751355858ef6cc92f5966ef873355b4087900f8a831c03133"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.885474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" event={"ID":"9922f280-ff61-424a-a336-769c0cfb5da2","Type":"ContainerStarted","Data":"30f998d369401c48a9cb14c97ff2199f0c0ff3877f27412682cd41fab6cb73d0"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.890193 4869 generic.go:334] "Generic (PLEG): container finished" podID="aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804" containerID="3486b6e56d27275d69a67f88155309502e48009b2bc86d502be592fe3bea07bb" exitCode=0
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.890294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" event={"ID":"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804","Type":"ContainerDied","Data":"3486b6e56d27275d69a67f88155309502e48009b2bc86d502be592fe3bea07bb"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.890333 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" event={"ID":"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804","Type":"ContainerStarted","Data":"05c541b5fb87668031fdd72e896a3bc99c1d87cc9d223ad7767b25528bc3b5db"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.893832 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" event={"ID":"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f","Type":"ContainerStarted","Data":"3b1b61802d93cd5c7c479af3b71a8d217bc71bbb0e14188d2aafd4662337373c"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.893892 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" event={"ID":"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f","Type":"ContainerStarted","Data":"d7f0c9fd23834a043f720eec366729ea6c97a4e56370e8110b05b1c34cecd5a8"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.903367 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" event={"ID":"992c2b96-5783-4865-a47d-167caf91e241","Type":"ContainerStarted","Data":"4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.903445 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.906847 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-245rt" event={"ID":"7c9fade4-43f8-4b81-90de-876b5fac7b4c","Type":"ContainerStarted","Data":"9cec97abbb7bf422588b8e0d50f5b664457daedbf2502f2ef1dca09ffae879e2"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.913376 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-snmjm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.913526 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.916273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" event={"ID":"0fb104b8-53b8-45dd-8406-206d6ba5a250","Type":"ContainerStarted","Data":"021299cc13546b3f383ba488e2cafe7486ef37ed6c0eca198fd06c72bf8210ed"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.918256 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-snfqj" event={"ID":"a549ee44-8319-4980-ac57-9f0c8f169784","Type":"ContainerStarted","Data":"ad5f248a948412d08a9279057eb39d56e5b75334a121a2e61b307630af16d2b8"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.920068 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:08 crc kubenswrapper[4869]: E0202 14:36:08.920620 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.420603301 +0000 UTC m=+171.065240071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.921162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" event={"ID":"aad51ba6-f20d-48b1-b456-c7309cc35bbd","Type":"ContainerStarted","Data":"35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.921224 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" event={"ID":"aad51ba6-f20d-48b1-b456-c7309cc35bbd","Type":"ContainerStarted","Data":"e0e031e07f3777bf084c57bd2ad11cca8d11083d95a8cbf49d91d2ce2ed3c4ce"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.921568 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.925714 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2zsv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.925768 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.926817 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" event={"ID":"dae3c559-c92e-45a1-8e66-383dee4460cd","Type":"ContainerStarted","Data":"475122a0b994fa79d5f3dd602b29797fe199c5e8506b565b2ad726b9bcc7d313"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.926871 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" event={"ID":"dae3c559-c92e-45a1-8e66-383dee4460cd","Type":"ContainerStarted","Data":"259282048e007b5f2976df9faef40982b18fa21eaa64efe8568ad33302a63d2d"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.930121 4869 generic.go:334] "Generic (PLEG): container finished" podID="78130644-70b6-4285-9ca7-e5a671bd1111" containerID="4099c71b08581568ecd4efafcfae076d9ebd7bdba6d5418d35fcbab38fc6794f" exitCode=0
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.931425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" event={"ID":"78130644-70b6-4285-9ca7-e5a671bd1111","Type":"ContainerDied","Data":"4099c71b08581568ecd4efafcfae076d9ebd7bdba6d5418d35fcbab38fc6794f"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.931475 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" event={"ID":"78130644-70b6-4285-9ca7-e5a671bd1111","Type":"ContainerStarted","Data":"e200de564724006535d5b993c357e3923e2157ea97fa6f6141e1672dfbaf45b4"}
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.931576 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.979193 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.980171 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"
Feb 02 14:36:08 crc kubenswrapper[4869]: I0202 14:36:08.987937 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"
Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.006832 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"
Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.022733 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.026769 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.526722231 +0000 UTC m=+171.171359001 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.067831 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gkjqg" podStartSLOduration=147.067805715 podStartE2EDuration="2m27.067805715s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.063201222 +0000 UTC m=+170.707837992" watchObservedRunningTime="2026-02-02 14:36:09.067805715 +0000 UTC m=+170.712442485" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.070372 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dxvvv"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.126897 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.127606 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.627577901 +0000 UTC m=+171.272214671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.181035 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pm4x8" podStartSLOduration=147.18100967 podStartE2EDuration="2m27.18100967s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.180480708 +0000 UTC m=+170.825117478" watchObservedRunningTime="2026-02-02 14:36:09.18100967 +0000 UTC m=+170.825646440" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.234507 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.234732 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.734701557 +0000 UTC m=+171.379338327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.235710 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.236361 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.736322906 +0000 UTC m=+171.380959676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.266429 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ptmkd" podStartSLOduration=147.266404969 podStartE2EDuration="2m27.266404969s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.262737139 +0000 UTC m=+170.907373939" watchObservedRunningTime="2026-02-02 14:36:09.266404969 +0000 UTC m=+170.911041739" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.337176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.337640 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.837587296 +0000 UTC m=+171.482224066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.440059 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.440511 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:09.940494397 +0000 UTC m=+171.585131177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.542247 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.542785 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.042763642 +0000 UTC m=+171.687400412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.562126 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.625997 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gv86n" podStartSLOduration=147.625964186 podStartE2EDuration="2m27.625964186s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.619095206 +0000 UTC m=+171.263731976" watchObservedRunningTime="2026-02-02 14:36:09.625964186 +0000 UTC m=+171.270600956" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.647283 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.648269 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.148251186 +0000 UTC m=+171.792887966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.710878 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.714199 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.727011 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts"] Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.748373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.748829 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.248797079 +0000 UTC m=+171.893433849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: W0202 14:36:09.757478 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5daf4eab_ca30_4ea4_9eb0_6cc5f06877df.slice/crio-78a320908538974d04a95155c353079e9c6ffd43086bfc3504554a43475e51d7 WatchSource:0}: Error finding container 78a320908538974d04a95155c353079e9c6ffd43086bfc3504554a43475e51d7: Status 404 returned error can't find the container with id 78a320908538974d04a95155c353079e9c6ffd43086bfc3504554a43475e51d7 Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.786165 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qx2qt" podStartSLOduration=147.786136851 podStartE2EDuration="2m27.786136851s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.784792688 +0000 UTC m=+171.429429478" watchObservedRunningTime="2026-02-02 14:36:09.786136851 +0000 UTC m=+171.430773621" Feb 02 14:36:09 crc kubenswrapper[4869]: W0202 14:36:09.815154 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75d2e36_7785_4a76_8dfb_55227d418d19.slice/crio-5f53b7ae560a1defb1250afbd9da6e468fd2a610f7851883f37a71ed4b1c2d8c WatchSource:0}: Error finding container 5f53b7ae560a1defb1250afbd9da6e468fd2a610f7851883f37a71ed4b1c2d8c: Status 404 returned error can't find the container with id 5f53b7ae560a1defb1250afbd9da6e468fd2a610f7851883f37a71ed4b1c2d8c Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.851492 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.851960 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.351942786 +0000 UTC m=+171.996579556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.898252 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" podStartSLOduration=147.898216888 podStartE2EDuration="2m27.898216888s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:09.870428502 +0000 UTC m=+171.515065262" watchObservedRunningTime="2026-02-02 14:36:09.898216888 +0000 UTC m=+171.542853658" Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.952786 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:09 crc kubenswrapper[4869]: E0202 14:36:09.954057 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.454033836 +0000 UTC m=+172.098670596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.969022 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-snfqj" event={"ID":"a549ee44-8319-4980-ac57-9f0c8f169784","Type":"ContainerStarted","Data":"35d87baf44583a98f4382cfd19d7f9ed312b1d2fff154a551bb87f7bdb8e09be"} Feb 02 14:36:09 crc kubenswrapper[4869]: I0202 14:36:09.995277 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z4jh5" event={"ID":"0e414f83-c91b-4997-8cb3-3e200f62e45a","Type":"ContainerStarted","Data":"c8f98d231007cd54932c402080e605ab6217a19ab274c06493e5ae9aee3283e8"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.018953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" event={"ID":"1b6ec461-dbfb-4c98-9e2b-0946363a2f1f","Type":"ContainerStarted","Data":"f14c3d76b43ca6897f530064913e262d5e368ee2078f33c8b96634f28866bf0e"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.063973 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.069132 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.569110998 +0000 UTC m=+172.213747768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.079933 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-245rt" event={"ID":"7c9fade4-43f8-4b81-90de-876b5fac7b4c","Type":"ContainerStarted","Data":"83c30a5bae358f2d5eeaef9e90bacc6e4d4e85b599e85292ed9599dae6e574f4"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.086063 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.090486 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.092801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" event={"ID":"0fb104b8-53b8-45dd-8406-206d6ba5a250","Type":"ContainerStarted","Data":"b634570d4b57aa186c6ad6fde832ed5506971ec301154cef1fa3228b98685ea1"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.095607 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" event={"ID":"6aacb2d9-48ca-4f95-9153-8f4338b4a16c","Type":"ContainerStarted","Data":"f60b75498b78b8e1d9cc016298eb48c46aa35a328a6f9623b5ed8f151ff061f4"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.100827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" event={"ID":"6ea4b230-5ebc-4712-88e0-ce48acfc4785","Type":"ContainerStarted","Data":"12224d7b4868f3fbdaa05a1f8ea9b38f4b88f351d1b341503889cdc6f1e2b977"} Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.108813 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66b506ef_4fcb_4bdc_bf47_f875c04441c0.slice/crio-b1afad876e6ed59371949ca690c9be076c296157567deb4db55b6d1b5af60fab WatchSource:0}: Error finding container b1afad876e6ed59371949ca690c9be076c296157567deb4db55b6d1b5af60fab: Status 404 returned error can't find the container with id b1afad876e6ed59371949ca690c9be076c296157567deb4db55b6d1b5af60fab Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.131661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" event={"ID":"aaf3c5a5-da3e-43dc-b8dc-a02b3fd32804","Type":"ContainerStarted","Data":"de17559b548103ed4151d27f45d7c40673f6fe65a49238ba71e888fd2ea0d5f7"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.143570 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.161605 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 
14:36:10.166040 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.166467 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.66643855 +0000 UTC m=+172.311075320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.167809 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" event={"ID":"8a76e81a-7f92-4baf-9604-1e1c011da3a0","Type":"ContainerStarted","Data":"99fe4a5be62a6ad1018e541152f4fb564364b1354bc8972cc30de89c4885e368"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.172223 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.173505 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.673478614 +0000 UTC m=+172.318115544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.182366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" event={"ID":"3d2cef1c-ff45-4005-8550-4d87d4601dbd","Type":"ContainerStarted","Data":"3df94f7612a3b09263565ce4d388a5ed4804818685a2c16751a5f4a9aeb282a1"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.182420 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" event={"ID":"3d2cef1c-ff45-4005-8550-4d87d4601dbd","Type":"ContainerStarted","Data":"7ac04d0c3040d8996b230ec821decd6f44849e71608bd7abdb92a90e32cd2c53"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.183484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.187156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" event={"ID":"f75d2e36-7785-4a76-8dfb-55227d418d19","Type":"ContainerStarted","Data":"5f53b7ae560a1defb1250afbd9da6e468fd2a610f7851883f37a71ed4b1c2d8c"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.191620 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kdq4v"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.193993 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-dxvvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.194056 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" podUID="3d2cef1c-ff45-4005-8550-4d87d4601dbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.202935 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q"] Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.215134 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f135077_03c5_46c5_a9c0_603837453e1c.slice/crio-83a3f10b8c2c997ac5b03bd38677c33a94bfd3925c3c2498b6d74e438d3ad967 WatchSource:0}: Error finding container 83a3f10b8c2c997ac5b03bd38677c33a94bfd3925c3c2498b6d74e438d3ad967: Status 404 returned error can't find the container with id 83a3f10b8c2c997ac5b03bd38677c33a94bfd3925c3c2498b6d74e438d3ad967 Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.231145 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.240808 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zqdwm"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.263784 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podStartSLOduration=148.263734513 podStartE2EDuration="2m28.263734513s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.161257982 +0000 UTC m=+171.805894752" watchObservedRunningTime="2026-02-02 14:36:10.263734513 +0000 UTC m=+171.908371293" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.273269 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.273872 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.773840692 +0000 UTC m=+172.418477452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.275986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.276629 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.77660225 +0000 UTC m=+172.421239230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.284160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" event={"ID":"78130644-70b6-4285-9ca7-e5a671bd1111","Type":"ContainerStarted","Data":"b9283d0cd7c9c1a92ff238d01fa62272096457af7dad6776f1218dfdbaa71354"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.313559 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" event={"ID":"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df","Type":"ContainerStarted","Data":"78a320908538974d04a95155c353079e9c6ffd43086bfc3504554a43475e51d7"} Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.314021 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mcwnk"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.314482 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2zsv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.314557 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.314889 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.315350 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cvd9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.315398 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" podUID="5daf4eab-ca30-4ea4-9eb0-6cc5f06877df" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.378234 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.379820 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.879747096 +0000 UTC m=+172.524383876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.451044 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-m44c2"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.484793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.486645 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:10.986620496 +0000 UTC m=+172.631257266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.505340 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.535699 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.535745 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.541524 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.546496 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.550058 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.558575 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.564162 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whptb"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.564188 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.565628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.577522 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.578754 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8vv5"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.585658 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.585992 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.085975539 +0000 UTC m=+172.730612309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.588357 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs"] Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.591171 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d2d2e9_b85f_46b8_b768_a59ebd9fd423.slice/crio-ce7f63badb24e05223ca294598e40ae82df21b8c3cf6df677c6caaf0d0c37ba9 WatchSource:0}: Error finding container ce7f63badb24e05223ca294598e40ae82df21b8c3cf6df677c6caaf0d0c37ba9: Status 404 returned error can't find the container with id ce7f63badb24e05223ca294598e40ae82df21b8c3cf6df677c6caaf0d0c37ba9 Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.593081 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.599500 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-p9cvf"] Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.611870 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" podStartSLOduration=148.611854467 podStartE2EDuration="2m28.611854467s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.609469909 +0000 UTC m=+172.254106679" watchObservedRunningTime="2026-02-02 14:36:10.611854467 +0000 UTC m=+172.256491237" Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.629348 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9znt6" podStartSLOduration=148.629324599 podStartE2EDuration="2m28.629324599s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.626580711 +0000 UTC m=+172.271217491" watchObservedRunningTime="2026-02-02 14:36:10.629324599 +0000 UTC m=+172.273961369" Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.677404 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddebcc43e_e06f_486a_af8c_6a9d4d553913.slice/crio-64ff9adab44c0f1a58c22d3d6e6e049a5e43dadea0c49572833ddf02862aad21 WatchSource:0}: Error finding container 64ff9adab44c0f1a58c22d3d6e6e049a5e43dadea0c49572833ddf02862aad21: Status 404 returned error can't find the container with id 64ff9adab44c0f1a58c22d3d6e6e049a5e43dadea0c49572833ddf02862aad21 Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.678015 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" 
podStartSLOduration=147.6779875 podStartE2EDuration="2m27.6779875s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.677799265 +0000 UTC m=+172.322436035" watchObservedRunningTime="2026-02-02 14:36:10.6779875 +0000 UTC m=+172.322624280"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.687766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.693502 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.193461473 +0000 UTC m=+172.838098243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.702278 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d00dceb_f9c4_4c49_a631_ea69008c387a.slice/crio-104326e0965902be6346d974daaf4f58a782a44ad7786cf764b87875e2400306 WatchSource:0}: Error finding container 104326e0965902be6346d974daaf4f58a782a44ad7786cf764b87875e2400306: Status 404 returned error can't find the container with id 104326e0965902be6346d974daaf4f58a782a44ad7786cf764b87875e2400306
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.742349 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-x5lbr" podStartSLOduration=148.742330908 podStartE2EDuration="2m28.742330908s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.710260797 +0000 UTC m=+172.354897567" watchObservedRunningTime="2026-02-02 14:36:10.742330908 +0000 UTC m=+172.386967668"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.748411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.760144 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc40fc5ef_7c09_46e1_808d_f388cba3a5e3.slice/crio-f43a6ee746ecf98b90ca8804c537a481fa80af061053ea49d9e80bd5ef5ee618 WatchSource:0}: Error finding container f43a6ee746ecf98b90ca8804c537a481fa80af061053ea49d9e80bd5ef5ee618: Status 404 returned error can't find the container with id f43a6ee746ecf98b90ca8804c537a481fa80af061053ea49d9e80bd5ef5ee618
Feb 02 14:36:10 crc kubenswrapper[4869]: W0202 14:36:10.760392 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ade6e3e_6274_4469_af6f_39455fd721db.slice/crio-035a86284c8086780e24e2b4c98eb0f1dc2aacdb070239b9b9d5b1fe1ab8996c WatchSource:0}: Error finding container 035a86284c8086780e24e2b4c98eb0f1dc2aacdb070239b9b9d5b1fe1ab8996c: Status 404 returned error can't find the container with id 035a86284c8086780e24e2b4c98eb0f1dc2aacdb070239b9b9d5b1fe1ab8996c
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.790188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.790642 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-245rt" podStartSLOduration=5.790621611 podStartE2EDuration="5.790621611s" podCreationTimestamp="2026-02-02 14:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.742709188 +0000 UTC m=+172.387345958" watchObservedRunningTime="2026-02-02 14:36:10.790621611 +0000 UTC m=+172.435258381"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.791084 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.291057831 +0000 UTC m=+172.935694601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.791326 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z4jh5" podStartSLOduration=5.791322718 podStartE2EDuration="5.791322718s" podCreationTimestamp="2026-02-02 14:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.78773226 +0000 UTC m=+172.432369030" watchObservedRunningTime="2026-02-02 14:36:10.791322718 +0000 UTC m=+172.435959488"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.889082 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-snfqj" podStartSLOduration=148.889060841 podStartE2EDuration="2m28.889060841s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.82985856 +0000 UTC m=+172.474495340" watchObservedRunningTime="2026-02-02 14:36:10.889060841 +0000 UTC m=+172.533697611"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.889444 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" podStartSLOduration=147.889434391 podStartE2EDuration="2m27.889434391s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.886896918 +0000 UTC m=+172.531533688" watchObservedRunningTime="2026-02-02 14:36:10.889434391 +0000 UTC m=+172.534071161"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.893563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.893926 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.393898041 +0000 UTC m=+173.038534811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.922273 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ttkq6" podStartSLOduration=148.922239521 podStartE2EDuration="2m28.922239521s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:10.921392999 +0000 UTC m=+172.566029769" watchObservedRunningTime="2026-02-02 14:36:10.922239521 +0000 UTC m=+172.566876311"
Feb 02 14:36:10 crc kubenswrapper[4869]: I0202 14:36:10.998457 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:10 crc kubenswrapper[4869]: E0202 14:36:10.999080 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.499055517 +0000 UTC m=+173.143692287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.080997 4869 csr.go:261] certificate signing request csr-4j5dp is approved, waiting to be issued
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.093260 4869 csr.go:257] certificate signing request csr-4j5dp is issued
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.101103 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.101525 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.601508897 +0000 UTC m=+173.246145667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.213414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.214550 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.714507837 +0000 UTC m=+173.359144627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.214679 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.215204 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.715194663 +0000 UTC m=+173.359831433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.315295 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.315708 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.815663954 +0000 UTC m=+173.460300844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.403969 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" event={"ID":"ca2f1c29-72b6-4768-8245-c5db262d052a","Type":"ContainerStarted","Data":"f037ecfa342889bc8e77c537f34c53f1db93c81d926d77561e653a5b5f5edc53"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.404037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" event={"ID":"ca2f1c29-72b6-4768-8245-c5db262d052a","Type":"ContainerStarted","Data":"9c12ce938552da76d5c5f3887e84a70163c30f4a298a458bc8fa949bcb0c1eb9"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.410725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" event={"ID":"2f135077-03c5-46c5-a9c0-603837453e1c","Type":"ContainerStarted","Data":"ade710e51ea34e3e3b68afb62334d5ffcdaf25851bce2e4cec3c13311c984917"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.410785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" event={"ID":"2f135077-03c5-46c5-a9c0-603837453e1c","Type":"ContainerStarted","Data":"83a3f10b8c2c997ac5b03bd38677c33a94bfd3925c3c2498b6d74e438d3ad967"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.416858 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.417347 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:11.917333014 +0000 UTC m=+173.561969784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.422352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" event={"ID":"c40fc5ef-7c09-46e1-808d-f388cba3a5e3","Type":"ContainerStarted","Data":"f43a6ee746ecf98b90ca8804c537a481fa80af061053ea49d9e80bd5ef5ee618"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.463313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" event={"ID":"a72caff3-6c15-4b44-9821-ed7b30a13b58","Type":"ContainerStarted","Data":"ae63f9c0d62409fa2fe4bd2555bab62088ba66bba2674df5a3b0c4a41613c2f8"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.463398 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" event={"ID":"a72caff3-6c15-4b44-9821-ed7b30a13b58","Type":"ContainerStarted","Data":"6c1b344c606bf0165834920bf58b76dc064bc2dcc3268e6992270dc8daca2c86"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.497153 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" event={"ID":"debcc43e-e06f-486a-af8c-6a9d4d553913","Type":"ContainerStarted","Data":"64ff9adab44c0f1a58c22d3d6e6e049a5e43dadea0c49572833ddf02862aad21"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.499237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" event={"ID":"6ea4b230-5ebc-4712-88e0-ce48acfc4785","Type":"ContainerStarted","Data":"48b00c29c217ddb68b1a5a87370d742f0fae5a672e3347d48d36f30c5aa0722d"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.504400 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" event={"ID":"90d2d2e9-b85f-46b8-b768-a59ebd9fd423","Type":"ContainerStarted","Data":"ce7f63badb24e05223ca294598e40ae82df21b8c3cf6df677c6caaf0d0c37ba9"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.515068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mcwnk" event={"ID":"f9f98e83-4853-4d43-bf81-09795442acc8","Type":"ContainerStarted","Data":"ad88062a40c9996c635fd2e473d95bdc62b642e212f6b82b7a05c63976249527"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.518269 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.521105 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.021046815 +0000 UTC m=+173.665683585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.524178 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:11 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:11 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:11 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.524407 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.531207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"f55cc82aaa2d8bf4dcd503e18bf7ad8d0b3fae62bcea25e83cdb617c7fc6764b"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.532229 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" event={"ID":"1d00dceb-f9c4-4c49-a631-ea69008c387a","Type":"ContainerStarted","Data":"104326e0965902be6346d974daaf4f58a782a44ad7786cf764b87875e2400306"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.546384 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" event={"ID":"0ade6e3e-6274-4469-af6f-39455fd721db","Type":"ContainerStarted","Data":"035a86284c8086780e24e2b4c98eb0f1dc2aacdb070239b9b9d5b1fe1ab8996c"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.553228 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" event={"ID":"ee31f112-5156-4239-a760-fb4c6bb9673d","Type":"ContainerStarted","Data":"86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.553280 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" event={"ID":"ee31f112-5156-4239-a760-fb4c6bb9673d","Type":"ContainerStarted","Data":"abf150712433e6a69bcdbac96eb8f5a7e4f4678220a199cb5fef1de1079707b8"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.553299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.559739 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xl8hj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.559800 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.573303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" event={"ID":"ab9815bf-1049-47c8-8eda-cf2602f2eb83","Type":"ContainerStarted","Data":"ebbb35a369b9723fdfeb34f546ac806481285e12e0053e2c255a12c42d7b4ce5"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.582707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" event={"ID":"78130644-70b6-4285-9ca7-e5a671bd1111","Type":"ContainerStarted","Data":"e3a339061df5e2b8d2778dd0a6334b4aea0b9e977556e43022ce4cb22949d68a"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.609952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" event={"ID":"f75d2e36-7785-4a76-8dfb-55227d418d19","Type":"ContainerStarted","Data":"050e31b91b5c7dedb132b86359245fc27b27608b0eca63aea8d88b7743f2102c"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.628056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.628533 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.128515948 +0000 UTC m=+173.773152718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.628791 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" event={"ID":"77160080-14bd-4f22-875d-ec53c922a9ca","Type":"ContainerStarted","Data":"b3271718de5d10823c1d8cb58a92daa70441d4c0775319d6b1e4703935350e20"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.646486 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" event={"ID":"e1a1dc5f-b886-4775-a090-0fe774fb23ed","Type":"ContainerStarted","Data":"9a84fa8773f7e9db4e69af3f2e4d4a7f1d9c4fa3d59d3f393762bacab2a6e295"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.686631 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" event={"ID":"b1cf41b3-7232-4a16-ad7f-0a686f1653dd","Type":"ContainerStarted","Data":"6e68930c6153f915b6348da6c34758a9e61c28fb9d9f8ea15c928685e6fa7eaa"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.702195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" event={"ID":"cc58cc97-069b-4691-88ed-cc2788096a6e","Type":"ContainerStarted","Data":"b6f4a2048a87e6162f6fa89fc21de966dcd24b8e327545bd8e8222d7be8856e4"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.728953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" event={"ID":"5daf4eab-ca30-4ea4-9eb0-6cc5f06877df","Type":"ContainerStarted","Data":"695ac9f52b74597d91419d9495d815281ffe5909b9759b0c54c81a9a495ced4a"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.733891 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" event={"ID":"18ef05f5-ba54-4dfe-adeb-32ed86dfce28","Type":"ContainerStarted","Data":"07691362d822347b63329bcaddc3fa54623ad2dc54914261820216d6f58bea84"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.735507 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zqdwm" event={"ID":"f62540d0-1acd-4266-9738-f0fdc72f47d0","Type":"ContainerStarted","Data":"8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.735544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zqdwm" event={"ID":"f62540d0-1acd-4266-9738-f0fdc72f47d0","Type":"ContainerStarted","Data":"d23da6374bff6b7548ad4e5c369db95c776162120875c848b3e93ff08178cc90"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.736474 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zqdwm"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.737342 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cvd9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.737404 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" podUID="5daf4eab-ca30-4ea4-9eb0-6cc5f06877df" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.737794 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.737881 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.738211 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.740078 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.240054032 +0000 UTC m=+173.884690812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.742238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" event={"ID":"66b506ef-4fcb-4bdc-bf47-f875c04441c0","Type":"ContainerStarted","Data":"d1fdfac94c4c8e5070c0087162537744ebf696ef57e2c9dbf6436561d3332c70"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.742294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" event={"ID":"66b506ef-4fcb-4bdc-bf47-f875c04441c0","Type":"ContainerStarted","Data":"b1afad876e6ed59371949ca690c9be076c296157567deb4db55b6d1b5af60fab"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.749524 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" podStartSLOduration=148.749496345 podStartE2EDuration="2m28.749496345s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.748716305 +0000 UTC m=+173.393353075" watchObservedRunningTime="2026-02-02 14:36:11.749496345 +0000 UTC m=+173.394133115"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.774460 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" event={"ID":"e73f227e-ad7c-4212-abd9-e844916c0a17","Type":"ContainerStarted","Data":"a4f243ea36089108322d4774b6549376f8fc0975b0592e48b51a9217c0a2c5a4"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.774524 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" event={"ID":"e73f227e-ad7c-4212-abd9-e844916c0a17","Type":"ContainerStarted","Data":"b36aaa97185006a917472ea03de586d6f90904ce211371d7414969abd8f9b5ef"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.793435 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" event={"ID":"f89cdf2d-50e4-4089-8345-f11f7791826d","Type":"ContainerStarted","Data":"44d49cb542cb7e83665ac7047938745e010bed6c9bb57eedbf13e90ff0bb7b43"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.793525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" event={"ID":"f89cdf2d-50e4-4089-8345-f11f7791826d","Type":"ContainerStarted","Data":"c522a244dc219f2146fe9387acd94329baa87923bbfc07b4d56bf7d9e6bf93d6"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.803899 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" podStartSLOduration=148.803880648 podStartE2EDuration="2m28.803880648s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.796888975 +0000 UTC m=+173.441525745" watchObservedRunningTime="2026-02-02 14:36:11.803880648 +0000 UTC m=+173.448517418"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.808694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" event={"ID":"8a76e81a-7f92-4baf-9604-1e1c011da3a0","Type":"ContainerStarted","Data":"8dcfd1eaef857715f398fc182b60f85d2107322e48bbc0dae26b995242b9ba42"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.809830 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.819641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" event={"ID":"31732c2e-e945-4fb4-b471-175489c076c4","Type":"ContainerStarted","Data":"42d93e3cbc22074c8226f82035e7a4b8ff016cab9732728da5e2ecc14ab3f7ad"}
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.821740 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-dxvvv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.821802 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" podUID="3d2cef1c-ff45-4005-8550-4d87d4601dbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.841558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.843993 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.343975817 +0000 UTC m=+173.988612587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.853007 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wnc44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.853056 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" podUID="8a76e81a-7f92-4baf-9604-1e1c011da3a0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.872426 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7h9lk" podStartSLOduration=149.87240848 podStartE2EDuration="2m29.87240848s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.869853667 +0000 UTC m=+173.514490437" watchObservedRunningTime="2026-02-02 14:36:11.87240848 +0000 UTC m=+173.517045250"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.921125 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" podStartSLOduration=149.921098131 podStartE2EDuration="2m29.921098131s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.920557878 +0000 UTC m=+173.565194658" watchObservedRunningTime="2026-02-02 14:36:11.921098131 +0000 UTC m=+173.565734901"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.950981 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" podStartSLOduration=148.950958989 podStartE2EDuration="2m28.950958989s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.946296504 +0000 UTC m=+173.590933274" watchObservedRunningTime="2026-02-02 14:36:11.950958989 +0000 UTC m=+173.595595759"
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.955617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.957824 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.457803668 +0000 UTC m=+174.102440438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.964290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:11 crc kubenswrapper[4869]: E0202 14:36:11.970580 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.470559743 +0000 UTC m=+174.115196693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:11 crc kubenswrapper[4869]: I0202 14:36:11.974101 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vx9ts" podStartSLOduration=149.97408361 podStartE2EDuration="2m29.97408361s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:11.973630189 +0000 UTC m=+173.618266959" watchObservedRunningTime="2026-02-02 14:36:11.97408361 +0000 UTC m=+173.618720370"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.048211 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zqdwm" podStartSLOduration=150.048192809 podStartE2EDuration="2m30.048192809s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:12.020556817 +0000 UTC m=+173.665193587" watchObservedRunningTime="2026-02-02 14:36:12.048192809 +0000 UTC m=+173.692829579"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.056538 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-l692p" podStartSLOduration=149.056516845 podStartE2EDuration="2m29.056516845s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:12.047232065 +0000 UTC m=+173.691868835" watchObservedRunningTime="2026-02-02 14:36:12.056516845 +0000 UTC m=+173.701153615"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.066010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.066542 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.566520502 +0000 UTC m=+174.211157272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.084990 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hcxlq" podStartSLOduration=150.084965307 podStartE2EDuration="2m30.084965307s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:12.075285868 +0000 UTC m=+173.719922638" watchObservedRunningTime="2026-02-02 14:36:12.084965307 +0000 UTC m=+173.729602097"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.094792 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 14:31:11 +0000 UTC, rotation deadline is 2026-11-23 02:01:11.556120623 +0000 UTC
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.094824 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7043h24m59.461298363s for next certificate rotation
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.168571 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.169089 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.669064763 +0000 UTC m=+174.313701533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.275085 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.275722 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.775693587 +0000 UTC m=+174.420330357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.377590 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.378086 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.878069694 +0000 UTC m=+174.522706464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.478851 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.479153 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.979109169 +0000 UTC m=+174.623745939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.479237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.479956 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:12.979891727 +0000 UTC m=+174.624528498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.495148 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.495222 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.512853 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:12 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:12 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:12 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.512938 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.553055 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.553103 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.554323 4869 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4hhbx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.24:8443/livez\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.554410 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" podUID="78130644-70b6-4285-9ca7-e5a671bd1111" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.24:8443/livez\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.580101 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.580373 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.080323027 +0000 UTC m=+174.724959797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.682514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.683036 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.183014882 +0000 UTC m=+174.827651652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.725329 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.785551 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.785748 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.285708799 +0000 UTC m=+174.930345569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.785819 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.786246 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.286238021 +0000 UTC m=+174.930874791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.842547 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" event={"ID":"ab9815bf-1049-47c8-8eda-cf2602f2eb83","Type":"ContainerStarted","Data":"e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.850738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jhvz8" event={"ID":"f75d2e36-7785-4a76-8dfb-55227d418d19","Type":"ContainerStarted","Data":"ef8720acbbb0cde38cd11f20ddc3b5bbe8043425fdcbdb9c0466357c3eb84c72"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.856491 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" event={"ID":"77160080-14bd-4f22-875d-ec53c922a9ca","Type":"ContainerStarted","Data":"cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.857481 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.862382 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" event={"ID":"debcc43e-e06f-486a-af8c-6a9d4d553913","Type":"ContainerStarted","Data":"79a060c65a071c8a6eac94dc82b8c5d175aa78c407291049a9ac6b9c662bbb68"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.868145 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wkkx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.868184 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.882781 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" event={"ID":"1d00dceb-f9c4-4c49-a631-ea69008c387a","Type":"ContainerStarted","Data":"e7b47fb05dc07563c6e17e3f38cda928b37cca11fcd6eb86f6712a8323f47042"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.886871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.887228 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.387212404 +0000 UTC m=+175.031849174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.889283 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" podStartSLOduration=150.889262575 podStartE2EDuration="2m30.889262575s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:12.883066482 +0000 UTC m=+174.527703252" watchObservedRunningTime="2026-02-02 14:36:12.889262575 +0000 UTC m=+174.533899345"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.895273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" event={"ID":"0ade6e3e-6274-4469-af6f-39455fd721db","Type":"ContainerStarted","Data":"5bdb58e3c8554e2e107e0a7bd7602f9f2c1fb7c1de002538f6347cee6a529395"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.896439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" event={"ID":"b1cf41b3-7232-4a16-ad7f-0a686f1653dd","Type":"ContainerStarted","Data":"f37dab21bb8c799a1fb48bfe7e098a1d7a0a48c1c7e0f9758ad1f7da6a9820fd"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.919448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" event={"ID":"18ef05f5-ba54-4dfe-adeb-32ed86dfce28","Type":"ContainerStarted","Data":"0b0d898dea99ae6130b83a51c70ce6a281543fdcf40703ef20b467bd4b5016f4"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.921673 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.921748 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mm87w container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.921782 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" podUID="18ef05f5-ba54-4dfe-adeb-32ed86dfce28" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.977372 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mcwnk" event={"ID":"f9f98e83-4853-4d43-bf81-09795442acc8","Type":"ContainerStarted","Data":"4cc7d7ac633cd7881e6e9539601545b2ba3d9d5a888752312433e2fd7df21bf0"}
Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.994061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" event={"ID":"90d2d2e9-b85f-46b8-b768-a59ebd9fd423","Type":"ContainerStarted","Data":"40517cccc8efefaf1477fcaf7a8cd3a66f7382893197e2ea8c5536d52860bf2c"}
Feb 02 14:36:12 crc kubenswrapper[4869]: E0202 14:36:12.990263 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.490225398 +0000 UTC m=+175.134862168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:12 crc kubenswrapper[4869]: I0202 14:36:12.989737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.031687 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" event={"ID":"c40fc5ef-7c09-46e1-808d-f388cba3a5e3","Type":"ContainerStarted","Data":"9d4299dd4ee149891ee67857fd20408464197200a25f1484ca8f9abbe611699c"} Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.036430 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" event={"ID":"e1a1dc5f-b886-4775-a090-0fe774fb23ed","Type":"ContainerStarted","Data":"df007b47b50059c9e35f662246defb9d24cdf2981d4b8eebd50d0d27504470a2"} Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.047225 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" event={"ID":"ca2f1c29-72b6-4768-8245-c5db262d052a","Type":"ContainerStarted","Data":"faa857b149c345bd8bfa07adb91b3ffbe87eccda487e78297704f8b5002e9979"} Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.048350 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.050806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" event={"ID":"31732c2e-e945-4fb4-b471-175489c076c4","Type":"ContainerStarted","Data":"372d1ca5d39707b24abc420abf781fd41d51eddec701ad88b11b90dd08baed28"} Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.072352 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podStartSLOduration=150.072331155 podStartE2EDuration="2m30.072331155s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.006951151 +0000 UTC m=+174.651587921" watchObservedRunningTime="2026-02-02 14:36:13.072331155 +0000 UTC m=+174.716967925" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.095610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 
14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.095865 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.595819414 +0000 UTC m=+175.240456184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.096353 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.097048 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.597025384 +0000 UTC m=+175.241662154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.116027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" event={"ID":"a72caff3-6c15-4b44-9821-ed7b30a13b58","Type":"ContainerStarted","Data":"795f26ae7c23f2ca59379d8d860dbf52ed4a817bae1c93536e11d7327f2b272a"} Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.136003 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-r954c" podStartSLOduration=151.135980356 podStartE2EDuration="2m31.135980356s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.116561727 +0000 UTC m=+174.761198497" watchObservedRunningTime="2026-02-02 14:36:13.135980356 +0000 UTC m=+174.780617126" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.145238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" event={"ID":"cc58cc97-069b-4691-88ed-cc2788096a6e","Type":"ContainerStarted","Data":"ac62dba72a848cdafce7b31bdccf24a47e3c364fd51e800d5894de97bac8717d"} Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.160381 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6fd6q" podStartSLOduration=151.160363498 podStartE2EDuration="2m31.160363498s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.159517058 +0000 UTC m=+174.804153838" watchObservedRunningTime="2026-02-02 14:36:13.160363498 +0000 UTC m=+174.805000268" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.182391 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" event={"ID":"6ea4b230-5ebc-4712-88e0-ce48acfc4785","Type":"ContainerStarted","Data":"7729c375d16c72e8236ce14da691bfecff9d17b641c9dead88ab01677f5f85e3"} Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.183213 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.183271 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.184023 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wnc44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.184137 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xl8hj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.184130 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" podUID="8a76e81a-7f92-4baf-9604-1e1c011da3a0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.184192 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.199316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.200500 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.700484489 +0000 UTC m=+175.345121259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.208965 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lkcc2" podStartSLOduration=150.208932197 podStartE2EDuration="2m30.208932197s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.194221165 +0000 UTC m=+174.838857925" watchObservedRunningTime="2026-02-02 14:36:13.208932197 +0000 UTC m=+174.853568967" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.212852 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8c67" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.219640 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cvd9s" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.254414 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" podStartSLOduration=150.25438091 podStartE2EDuration="2m30.25438091s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.252047762 +0000 UTC m=+174.896684522" watchObservedRunningTime="2026-02-02 14:36:13.25438091 +0000 UTC m=+174.899017680" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.289806 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" podStartSLOduration=150.289777884 podStartE2EDuration="2m30.289777884s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.287018976 +0000 UTC m=+174.931655746" watchObservedRunningTime="2026-02-02 14:36:13.289777884 +0000 UTC m=+174.934414664" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.301834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.306845 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.806825694 +0000 UTC m=+175.451462464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.325555 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-m44c2" podStartSLOduration=151.325519786 podStartE2EDuration="2m31.325519786s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.321641751 +0000 UTC m=+174.966278521" watchObservedRunningTime="2026-02-02 14:36:13.325519786 +0000 UTC m=+174.970156546" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.412640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.415434 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.915386695 +0000 UTC m=+175.560023465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.418689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.419702 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:13.919681971 +0000 UTC m=+175.564318741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.420843 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7kwts" podStartSLOduration=151.420786828 podStartE2EDuration="2m31.420786828s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.417542418 +0000 UTC m=+175.062179208" watchObservedRunningTime="2026-02-02 14:36:13.420786828 +0000 UTC m=+175.065423598" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.481857 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t8vv5" podStartSLOduration=150.481828125 podStartE2EDuration="2m30.481828125s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.477122609 +0000 UTC m=+175.121759389" watchObservedRunningTime="2026-02-02 14:36:13.481828125 +0000 UTC m=+175.126464895" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.518342 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:13 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:13 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:13 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.518470 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.520011 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.520810 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.020786007 +0000 UTC m=+175.665422787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.608779 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xkblm" podStartSLOduration=150.608753199 podStartE2EDuration="2m30.608753199s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:13.603485669 +0000 UTC m=+175.248122429" watchObservedRunningTime="2026-02-02 14:36:13.608753199 +0000 UTC m=+175.253389969" Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.621956 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.622550 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.122525669 +0000 UTC m=+175.767162439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.723079 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.723680 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.223658176 +0000 UTC m=+175.868294946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.825419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.826145 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.326120286 +0000 UTC m=+175.970757056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:13 crc kubenswrapper[4869]: I0202 14:36:13.926939 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:13 crc kubenswrapper[4869]: E0202 14:36:13.927365 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.427342724 +0000 UTC m=+176.071979494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.029387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.029882 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.529864235 +0000 UTC m=+176.174501005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.131348 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.131801 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.63171126 +0000 UTC m=+176.276348040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.132275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.132988 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.632952721 +0000 UTC m=+176.277589491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.223401 4869 generic.go:334] "Generic (PLEG): container finished" podID="debcc43e-e06f-486a-af8c-6a9d4d553913" containerID="79a060c65a071c8a6eac94dc82b8c5d175aa78c407291049a9ac6b9c662bbb68" exitCode=0 Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.223486 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" event={"ID":"debcc43e-e06f-486a-af8c-6a9d4d553913","Type":"ContainerDied","Data":"79a060c65a071c8a6eac94dc82b8c5d175aa78c407291049a9ac6b9c662bbb68"} Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.234929 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.235459 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.735429341 +0000 UTC m=+176.380066111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.235851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"431fcb70cc98461d103c7d616c03636fbcbfad85bee6bb13d436e2e8654f0988"} Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.235979 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.236665 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.736647761 +0000 UTC m=+176.381284531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.249256 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" event={"ID":"1d00dceb-f9c4-4c49-a631-ea69008c387a","Type":"ContainerStarted","Data":"dd97d4a06a90cd2cda4f8644b12c3149169049a2f7ded09da0000e4775e24d6f"} Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.255357 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" event={"ID":"0ade6e3e-6274-4469-af6f-39455fd721db","Type":"ContainerStarted","Data":"2fe24b2358acc507cb164f64c0ef048b0918ef9839bfde0a0b2b8cdbf6f926ca"} Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.261212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" event={"ID":"b1cf41b3-7232-4a16-ad7f-0a686f1653dd","Type":"ContainerStarted","Data":"216312852a9f884101982c9754e0108b3105ec374289ba9e25ba29f1e483c3a5"} Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.272426 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wkkx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.272723 4869 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.273329 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mcwnk" event={"ID":"f9f98e83-4853-4d43-bf81-09795442acc8","Type":"ContainerStarted","Data":"06922e5520bf22f7f5d842b5c1203fcdfd0d3eb01fafd05a614a43cd41b01c4e"} Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.275703 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.280964 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mm87w container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.281060 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" podUID="18ef05f5-ba54-4dfe-adeb-32ed86dfce28" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.337539 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.338107 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.838071126 +0000 UTC m=+176.482707946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.338732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.343120 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 14:36:14.843096189 +0000 UTC m=+176.487732959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.386789 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9rsqs" podStartSLOduration=152.386756937 podStartE2EDuration="2m32.386756937s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:14.383401684 +0000 UTC m=+176.028038454" watchObservedRunningTime="2026-02-02 14:36:14.386756937 +0000 UTC m=+176.031393707" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.440397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.443867 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:14.943834286 +0000 UTC m=+176.588471216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.476978 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mcwnk" podStartSLOduration=9.476941574 podStartE2EDuration="9.476941574s" podCreationTimestamp="2026-02-02 14:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:14.437708646 +0000 UTC m=+176.082345446" watchObservedRunningTime="2026-02-02 14:36:14.476941574 +0000 UTC m=+176.121578344" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.511349 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-whptb" podStartSLOduration=151.511316342 podStartE2EDuration="2m31.511316342s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:14.479070726 +0000 UTC m=+176.123707506" watchObservedRunningTime="2026-02-02 14:36:14.511316342 +0000 UTC m=+176.155953112" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.512073 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:14 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:14 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:14 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.512151 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.512827 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-p9cvf" podStartSLOduration=151.512819219 podStartE2EDuration="2m31.512819219s" podCreationTimestamp="2026-02-02 14:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:14.510451042 +0000 UTC m=+176.155087812" watchObservedRunningTime="2026-02-02 14:36:14.512819219 +0000 UTC m=+176.157455989" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.544228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.544810 4869 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.044785809 +0000 UTC m=+176.689422579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.585204 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnc44" Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.645611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.645861 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.145832933 +0000 UTC m=+176.790469703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.646183 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.646621 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.146611843 +0000 UTC m=+176.791248613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.750469 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.751236 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.251204606 +0000 UTC m=+176.895841376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.852643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.853238 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.353219193 +0000 UTC m=+176.997855963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.955570 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.955744 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.455710075 +0000 UTC m=+177.100346855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:14 crc kubenswrapper[4869]: I0202 14:36:14.956009 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:14 crc kubenswrapper[4869]: E0202 14:36:14.956506 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.456491453 +0000 UTC m=+177.101128223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.057508 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.057755 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.557707283 +0000 UTC m=+177.202344053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.058305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.058830 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.5588182 +0000 UTC m=+177.203454970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.124782 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.126005 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.128648 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.143687 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.160595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.161052 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.661031523 +0000 UTC m=+177.305668293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.262226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.262306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.262333 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.262381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.262813 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.762797496 +0000 UTC m=+177.407434266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.278704 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" event={"ID":"debcc43e-e06f-486a-af8c-6a9d4d553913","Type":"ContainerStarted","Data":"08b218f97c320580457a90382097567e64a984def27625ca3e5653ef269c19ed"} Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.279548 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.304617 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.304694 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.321309 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mm87w" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.327732 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" podStartSLOduration=153.327707948 podStartE2EDuration="2m33.327707948s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:15.315150508 +0000 UTC m=+176.959787278" watchObservedRunningTime="2026-02-02 14:36:15.327707948 +0000 UTC m=+176.972344718" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.352592 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.354688 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.354853 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.364262 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.369634 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.370379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.370534 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.370629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.371802 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.871774957 +0000 UTC m=+177.516411727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.373254 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.384056 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.434851 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") pod \"certified-operators-g6crm\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") " pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.444612 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.478365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.478481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.478552 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.478598 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:15 crc kubenswrapper[4869]: 
E0202 14:36:15.479106 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:15.979085676 +0000 UTC m=+177.623722456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.515463 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:15 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:15 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:15 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.516033 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.543094 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"] Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.544658 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.580198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.580649 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.580729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.580778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.582237 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.082213682 +0000 UTC m=+177.726850452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.582795 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.583077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.592168 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"] Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.630191 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") pod \"community-operators-h9pgx\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.684589 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.184568159 +0000 UTC m=+177.829204939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.684840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.684976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.685055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.685098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.696402 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.748292 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cm44g"] Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.749838 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.757270 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cm44g"] Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.789660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.789918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.790028 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.790056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.790542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.790624 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.290607118 +0000 UTC m=+177.935243878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.790943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.843119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") pod \"certified-operators-9xjnr\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.893056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.893124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.893180 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.893199 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.893529 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.393515778 +0000 UTC m=+178.038152548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.905177 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.915615 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.995963 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.996186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.996251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.996273 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: E0202 14:36:15.996633 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.496618834 +0000 UTC m=+178.141255604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.997879 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:15 crc kubenswrapper[4869]: I0202 14:36:15.998183 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.033736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") pod \"community-operators-cm44g\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") " pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.090821 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cm44g" Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.098739 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.099435 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.599415532 +0000 UTC m=+178.244052312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.202431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.203163 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.703147783 +0000 UTC m=+178.347784543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.278202 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.304008 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.304455 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.804421793 +0000 UTC m=+178.449058553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.368930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"fa36614e15907890a42ef404912d31f1c698eb5a63732a6a7df259babae4ecab"} Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.387638 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.413463 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.413880 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:16.913862005 +0000 UTC m=+178.558498775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.518984 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:16 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:16 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:16 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.519422 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.521302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.523625 4869 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.023607675 +0000 UTC m=+178.668244445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.624441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.624897 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.124875115 +0000 UTC m=+178.769511875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.728431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.731379 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.231346934 +0000 UTC m=+178.875983704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.767700 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"] Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.780248 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cm44g"] Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.841768 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.842061 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.342028617 +0000 UTC m=+178.986665387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.842379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.842970 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.34296124 +0000 UTC m=+178.987598010 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.943575 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.943811 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.443773399 +0000 UTC m=+179.088410169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:16 crc kubenswrapper[4869]: I0202 14:36:16.944030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:16 crc kubenswrapper[4869]: E0202 14:36:16.944439 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.444429795 +0000 UTC m=+179.089066565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.044683 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.044947 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.544894075 +0000 UTC m=+179.189530855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.045071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.045486 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.545470729 +0000 UTC m=+179.190107499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.146104 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.146308 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.646281238 +0000 UTC m=+179.290918008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.146395 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.146723 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.646715749 +0000 UTC m=+179.291352509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.247244 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.248000 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.747970679 +0000 UTC m=+179.392607459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.338845 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"]
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.340118 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.345051 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.349379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.349619 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.349879 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.350018 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.350481 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.850458529 +0000 UTC m=+179.495095349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.360454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"]
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.363854 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.363933 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.365523 4869 patch_prober.go:28] interesting pod/console-f9d7485db-ptmkd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.365601 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ptmkd" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.396376 4869 generic.go:334] "Generic (PLEG): container finished" podID="e56fa221-6e79-4c96-be0a-17db4803a127" containerID="b2450dd93a7c78de896bbf627e97911c1993d1380dd59859505aa8d294fc3f44" exitCode=0
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.396526 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerDied","Data":"b2450dd93a7c78de896bbf627e97911c1993d1380dd59859505aa8d294fc3f44"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.396575 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerStarted","Data":"3fdc2755e50c40ab06f7338836dcc4d68f5937d9bf9ebd941d8d98f6a64dcd17"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.401170 4869 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.402351 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.407478 4869 generic.go:334] "Generic (PLEG): container finished" podID="35334030-48c7-4d7e-b202-75371c2c74f0" containerID="cec776d323dbe8236b1c9db4384ebac1fa16daa022330512eaace0844c3b9f88" exitCode=0
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.407561 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerDied","Data":"cec776d323dbe8236b1c9db4384ebac1fa16daa022330512eaace0844c3b9f88"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.407601 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerStarted","Data":"8d9df88387111e57bb9b1545d6cad7ddb2c341d0c3125931bf95ce3cfbbe8249"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.413878 4869 generic.go:334] "Generic (PLEG): container finished" podID="20990512-5147-4de8-95e0-f40e2156f395" containerID="2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb" exitCode=0
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.413982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerDied","Data":"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.414023 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerStarted","Data":"63b62c3c310182414e285b775897296c2f662f58b08903ff210519308baba3a6"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.432330 4869 generic.go:334] "Generic (PLEG): container finished" podID="2c21252d-a76f-437f-8611-f42993137df3" containerID="f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2" exitCode=0
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.432454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerDied","Data":"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.432494 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerStarted","Data":"ab3d419e69ab359ef2eb23e842d3d4f04eb05500497bb827ac7bf3115cbf4af4"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.451346 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.453626 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.953584916 +0000 UTC m=+179.598221686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.453843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.454023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.454139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.454345 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:17.954333464 +0000 UTC m=+179.598970414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.454501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.455077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.456484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.456499 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.457645 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"efd99c9a4c72d1179ce8abb941e3dfc8599952e3dae1a7cc1ace6774a6786c46"}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.492019 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") pod \"redhat-marketplace-wrnr2\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.515204 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 14:36:17 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]process-running ok
Feb 02 14:36:17 crc kubenswrapper[4869]: healthz check failed
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.515304 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.566251 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.566526 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 14:36:18.066486543 +0000 UTC m=+179.711123313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.566665 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Feb 02 14:36:17 crc kubenswrapper[4869]: E0202 14:36:17.568730 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 14:36:18.068707078 +0000 UTC m=+179.713344038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-42krp" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.586117 4869 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4hhbx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]log ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]etcd ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/max-in-flight-filter ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-startinformers ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 02 14:36:17 crc kubenswrapper[4869]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 02 14:36:17 crc kubenswrapper[4869]: livez check failed
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.586212 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" podUID="78130644-70b6-4285-9ca7-e5a671bd1111" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.661116 4869 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T14:36:17.401197022Z","Handler":null,"Name":""}
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.661384 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.667106 4869 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.667314 4869 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.667733 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.677197 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.718004 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"] Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.744069 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"] Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.744325 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.773123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.773251 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.773306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.773334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.779090 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.779168 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.846178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-42krp\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.876874 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.876960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.876984 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.878052 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.880296 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.897764 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.901838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") pod \"redhat-marketplace-h4pkg\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") " pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:17 crc kubenswrapper[4869]: I0202 14:36:17.960492 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.076811 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4pkg" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.158624 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:36:18 crc kubenswrapper[4869]: W0202 14:36:18.192446 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe54b4f_c3d6_40ec_8d5d_422b6d86ad97.slice/crio-01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf WatchSource:0}: Error finding container 01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf: Status 404 returned error can't find the container with id 01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.211210 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hjpd4" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.326099 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.327670 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.331803 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.331890 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.331970 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dxvvv" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.346842 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.348444 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.351432 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.361349 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.378489 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.486714 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" event={"ID":"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97","Type":"ContainerStarted","Data":"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a"} Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.486808 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" event={"ID":"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97","Type":"ContainerStarted","Data":"01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf"} Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.488409 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.489168 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491568 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491636 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491669 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.491739 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.497481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" event={"ID":"bedd3f8b-6013-48a0-a84e-5c9760146d70","Type":"ContainerStarted","Data":"13780c7c2507648136ea93745567cc7dd4a9423d873dcf52722b800ccb531c6b"} Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.504580 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.510437 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:18 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:18 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:18 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.510493 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.524844 4869 generic.go:334] "Generic (PLEG): container finished" podID="7bc37994-d436-4a72-93dd-610683ab871f" containerID="cdd5576f9f5156d7b56f7ccd77833310c25ec9af1f7cd6b12b8a45a03d8370d2" exitCode=0 Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.525172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerDied","Data":"cdd5576f9f5156d7b56f7ccd77833310c25ec9af1f7cd6b12b8a45a03d8370d2"} Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.525228 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerStarted","Data":"b1580b4316ca71373b5cb2c825bf6078883c98f4a09960236d48783fdf4eb2b0"} Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.537247 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" podStartSLOduration=156.537212271 podStartE2EDuration="2m36.537212271s" podCreationTimestamp="2026-02-02 14:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:18.524807374 +0000 UTC m=+180.169444144" watchObservedRunningTime="2026-02-02 14:36:18.537212271 +0000 UTC m=+180.181849041" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.538706 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"] Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.562614 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kdq4v" podStartSLOduration=13.562574056999999 podStartE2EDuration="13.562574057s" podCreationTimestamp="2026-02-02 14:36:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:18.55135967 +0000 UTC m=+180.195996440" watchObservedRunningTime="2026-02-02 14:36:18.562574057 +0000 UTC m=+180.207210827" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.576263 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.576350 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:18 crc kubenswrapper[4869]: W0202 14:36:18.576848 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod442e63b3_7f70_4524_b229_aedfb054f395.slice/crio-1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2 WatchSource:0}: Error finding container 1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2: Status 404 returned error can't find the container with id 1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2 Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.578798 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.582824 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593333 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593432 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.593500 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.594980 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.595263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.602330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.635164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") pod \"redhat-operators-k7wp9\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.650412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.669516 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.684739 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.754392 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"] Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.756334 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.763529 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"] Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.904354 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.904414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:18 crc kubenswrapper[4869]: I0202 14:36:18.904512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.006615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.006681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.009177 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.009317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.009309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.033423 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") pod \"redhat-operators-9kt6r\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.078640 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.408747 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.525482 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:19 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:19 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:19 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.525550 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.531276 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.531820 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 14:36:19 crc kubenswrapper[4869]: W0202 14:36:19.553801 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podff46a125_ff31_42f7_9a16_3eccdd7dd393.slice/crio-b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a WatchSource:0}: Error finding container b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a: Status 404 returned error can't find the container with id b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.587480 4869 generic.go:334] "Generic (PLEG): container finished" podID="442e63b3-7f70-4524-b229-aedfb054f395" containerID="9fde05ff8b3ab7b33bf7fd64de1786d6d6c5b221f2074b9b8d881ce96c0861b1" exitCode=0 Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.589006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerDied","Data":"9fde05ff8b3ab7b33bf7fd64de1786d6d6c5b221f2074b9b8d881ce96c0861b1"} Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.589049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerStarted","Data":"1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2"} Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.643539 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" 
event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerStarted","Data":"4b24ce2f2248f4687d66222d8d64c3f4c7ab1a667da994a65103b5daf7f6074a"} Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.778761 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"] Feb 02 14:36:19 crc kubenswrapper[4869]: W0202 14:36:19.856347 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e119c7_dd08_471f_9800_5bda7b22a6d6.slice/crio-9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883 WatchSource:0}: Error finding container 9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883: Status 404 returned error can't find the container with id 9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883 Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.976316 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.978046 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.981741 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.981759 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 14:36:19 crc kubenswrapper[4869]: I0202 14:36:19.988454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.163195 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.163375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.264808 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.264934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.265094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.311335 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.513336 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:20 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:20 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:20 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.513652 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.599346 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.647558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff46a125-ff31-42f7-9a16-3eccdd7dd393","Type":"ContainerStarted","Data":"4b8dc8f4396db1dcb28c3807745cf0ef5dad421ac82661e4237038d651a54858"} Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.647621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff46a125-ff31-42f7-9a16-3eccdd7dd393","Type":"ContainerStarted","Data":"b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a"} Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.656301 4869 generic.go:334] "Generic (PLEG): container finished" podID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerID="5761dc2d2fafda3cf6b457c2de25d204c006ac8d85953364b9966521a437f222" exitCode=0 Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.656405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerDied","Data":"5761dc2d2fafda3cf6b457c2de25d204c006ac8d85953364b9966521a437f222"} Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.656446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerStarted","Data":"9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883"} Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.663666 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerID="5bd8c5ee8e9e88d2880af3adebbdb0e7854ddadb441729295abb6d7e6958afdd" exitCode=0 Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.664263 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerDied","Data":"5bd8c5ee8e9e88d2880af3adebbdb0e7854ddadb441729295abb6d7e6958afdd"} Feb 02 14:36:20 crc kubenswrapper[4869]: I0202 14:36:20.698365 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.698335108 podStartE2EDuration="2.698335108s" podCreationTimestamp="2026-02-02 14:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:20.672669055 +0000 UTC m=+182.317305825" watchObservedRunningTime="2026-02-02 14:36:20.698335108 +0000 UTC m=+182.342971878" Feb 02 14:36:20 crc kubenswrapper[4869]: E0202 14:36:20.708587 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab9815bf_1049_47c8_8eda_cf2602f2eb83.slice/crio-conmon-e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab9815bf_1049_47c8_8eda_cf2602f2eb83.slice/crio-e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f.scope\": RecentStats: unable to find data in memory cache]" Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.057422 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 14:36:21 crc kubenswrapper[4869]: W0202 14:36:21.118320 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0fa6bddf_2294_4b66_816d_1bdaf3cd3c93.slice/crio-56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052 WatchSource:0}: Error finding container 56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052: Status 404 returned error can't find the container with id 56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052 Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.509950 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:21 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:21 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:21 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.510047 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.696776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93","Type":"ContainerStarted","Data":"56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052"} Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.703727 4869 generic.go:334] "Generic (PLEG): container finished" podID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" containerID="e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f" exitCode=0 Feb 02 14:36:21 crc 
kubenswrapper[4869]: I0202 14:36:21.703783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" event={"ID":"ab9815bf-1049-47c8-8eda-cf2602f2eb83","Type":"ContainerDied","Data":"e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f"} Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.709494 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff46a125-ff31-42f7-9a16-3eccdd7dd393" containerID="4b8dc8f4396db1dcb28c3807745cf0ef5dad421ac82661e4237038d651a54858" exitCode=0 Feb 02 14:36:21 crc kubenswrapper[4869]: I0202 14:36:21.709571 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff46a125-ff31-42f7-9a16-3eccdd7dd393","Type":"ContainerDied","Data":"4b8dc8f4396db1dcb28c3807745cf0ef5dad421ac82661e4237038d651a54858"} Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.512254 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:22 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:22 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:22 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.512338 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.559570 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.564928 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4hhbx" Feb 02 14:36:22 crc kubenswrapper[4869]: I0202 14:36:22.746340 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93","Type":"ContainerStarted","Data":"2a21e2516607900a6ee89e7cab6b19874f814d0f0ac5236718de9219148f8503"} Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.179608 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.180876 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.212522 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.212493992 podStartE2EDuration="4.212493992s" podCreationTimestamp="2026-02-02 14:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:22.770385086 +0000 UTC m=+184.415021856" watchObservedRunningTime="2026-02-02 14:36:23.212493992 +0000 UTC m=+184.857130762" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") pod \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322458 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") pod \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322476 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") pod \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\" (UID: \"ff46a125-ff31-42f7-9a16-3eccdd7dd393\") " Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") pod \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.322584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") pod \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\" (UID: \"ab9815bf-1049-47c8-8eda-cf2602f2eb83\") " Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.326306 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff46a125-ff31-42f7-9a16-3eccdd7dd393" (UID: "ff46a125-ff31-42f7-9a16-3eccdd7dd393"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.327039 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab9815bf-1049-47c8-8eda-cf2602f2eb83" (UID: "ab9815bf-1049-47c8-8eda-cf2602f2eb83"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.336312 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff46a125-ff31-42f7-9a16-3eccdd7dd393" (UID: "ff46a125-ff31-42f7-9a16-3eccdd7dd393"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.348122 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl" (OuterVolumeSpecName: "kube-api-access-wwxkl") pod "ab9815bf-1049-47c8-8eda-cf2602f2eb83" (UID: "ab9815bf-1049-47c8-8eda-cf2602f2eb83"). InnerVolumeSpecName "kube-api-access-wwxkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.357195 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab9815bf-1049-47c8-8eda-cf2602f2eb83" (UID: "ab9815bf-1049-47c8-8eda-cf2602f2eb83"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424406 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424446 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab9815bf-1049-47c8-8eda-cf2602f2eb83-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424457 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff46a125-ff31-42f7-9a16-3eccdd7dd393-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424465 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab9815bf-1049-47c8-8eda-cf2602f2eb83-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.424475 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwxkl\" (UniqueName: \"kubernetes.io/projected/ab9815bf-1049-47c8-8eda-cf2602f2eb83-kube-api-access-wwxkl\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.509311 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:23 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:23 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:23 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.509429 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.536745 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mcwnk" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.804635 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" event={"ID":"ab9815bf-1049-47c8-8eda-cf2602f2eb83","Type":"ContainerDied","Data":"ebbb35a369b9723fdfeb34f546ac806481285e12e0053e2c255a12c42d7b4ce5"} Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.804719 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebbb35a369b9723fdfeb34f546ac806481285e12e0053e2c255a12c42d7b4ce5" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.804756 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.826892 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.827244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ff46a125-ff31-42f7-9a16-3eccdd7dd393","Type":"ContainerDied","Data":"b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a"} Feb 02 14:36:23 crc kubenswrapper[4869]: I0202 14:36:23.827273 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0584983882cb169f8da6f9e5a6656795f38eb0e3d5239f1ce0671a66ae53c1a" Feb 02 14:36:24 crc kubenswrapper[4869]: I0202 14:36:24.527848 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:24 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:24 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:24 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:24 crc kubenswrapper[4869]: I0202 14:36:24.527949 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:24 crc kubenswrapper[4869]: I0202 14:36:24.847388 4869 generic.go:334] "Generic (PLEG): container finished" podID="0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" containerID="2a21e2516607900a6ee89e7cab6b19874f814d0f0ac5236718de9219148f8503" exitCode=0 Feb 02 14:36:24 crc kubenswrapper[4869]: I0202 14:36:24.847815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93","Type":"ContainerDied","Data":"2a21e2516607900a6ee89e7cab6b19874f814d0f0ac5236718de9219148f8503"} Feb 02 14:36:25 crc kubenswrapper[4869]: I0202 14:36:25.507771 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:25 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:25 crc 
kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:25 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:25 crc kubenswrapper[4869]: I0202 14:36:25.507868 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.228928 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.378296 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") pod \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.378501 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") pod \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\" (UID: \"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93\") " Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.379064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" (UID: "0fa6bddf-2294-4b66-816d-1bdaf3cd3c93"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.387260 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" (UID: "0fa6bddf-2294-4b66-816d-1bdaf3cd3c93"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.479965 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.480010 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6bddf-2294-4b66-816d-1bdaf3cd3c93-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.509297 4869 patch_prober.go:28] interesting pod/router-default-5444994796-snfqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 14:36:26 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Feb 02 14:36:26 crc kubenswrapper[4869]: [+]process-running ok Feb 02 14:36:26 crc kubenswrapper[4869]: healthz check failed Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.509400 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-snfqj" podUID="a549ee44-8319-4980-ac57-9f0c8f169784" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.879006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0fa6bddf-2294-4b66-816d-1bdaf3cd3c93","Type":"ContainerDied","Data":"56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052"} Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.879059 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56ae2dcd4041b2ebc0316366730d1489c76b70b2c81ff6781ab6e12859720052" Feb 02 14:36:26 crc kubenswrapper[4869]: I0202 14:36:26.879172 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 14:36:27 crc kubenswrapper[4869]: I0202 14:36:27.363994 4869 patch_prober.go:28] interesting pod/console-f9d7485db-ptmkd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 02 14:36:27 crc kubenswrapper[4869]: I0202 14:36:27.364071 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ptmkd" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 02 14:36:27 crc kubenswrapper[4869]: I0202 14:36:27.507277 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:27 crc kubenswrapper[4869]: I0202 14:36:27.512136 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-snfqj" Feb 02 14:36:28 crc kubenswrapper[4869]: I0202 14:36:28.577380 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:28 crc kubenswrapper[4869]: I0202 14:36:28.578514 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:28 crc kubenswrapper[4869]: I0202 14:36:28.577448 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:28 crc kubenswrapper[4869]: I0202 14:36:28.578642 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.745494 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.746653 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" containerID="cri-o://35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4" gracePeriod=30 Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.793439 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.794113 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" 
podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" containerID="cri-o://cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d" gracePeriod=30 Feb 02 14:36:32 crc kubenswrapper[4869]: I0202 14:36:32.904588 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:36:33 crc kubenswrapper[4869]: I0202 14:36:33.938565 4869 generic.go:334] "Generic (PLEG): container finished" podID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerID="35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4" exitCode=0 Feb 02 14:36:33 crc kubenswrapper[4869]: I0202 14:36:33.938677 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" event={"ID":"aad51ba6-f20d-48b1-b456-c7309cc35bbd","Type":"ContainerDied","Data":"35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4"} Feb 02 14:36:33 crc kubenswrapper[4869]: I0202 14:36:33.941628 4869 generic.go:334] "Generic (PLEG): container finished" podID="77160080-14bd-4f22-875d-ec53c922a9ca" containerID="cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d" exitCode=0 Feb 02 14:36:33 crc kubenswrapper[4869]: I0202 14:36:33.941677 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" event={"ID":"77160080-14bd-4f22-875d-ec53c922a9ca","Type":"ContainerDied","Data":"cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d"} Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.368282 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.372729 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.435014 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2zsv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.435100 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 02 14:36:37 crc kubenswrapper[4869]: I0202 14:36:37.903721 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.577860 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.577884 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" 
start-of-body= Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.577937 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.577967 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578008 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578516 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb"} pod="openshift-console/downloads-7954f5f757-zqdwm" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578641 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" containerID="cri-o://8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb" gracePeriod=2 Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578942 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.578963 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.646863 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wkkx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 02 14:36:38 crc kubenswrapper[4869]: I0202 14:36:38.646957 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 02 14:36:39 crc kubenswrapper[4869]: I0202 14:36:39.978130 4869 generic.go:334] "Generic (PLEG): container finished" podID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerID="8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb" exitCode=0 Feb 02 14:36:39 crc kubenswrapper[4869]: I0202 14:36:39.978198 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zqdwm" event={"ID":"f62540d0-1acd-4266-9738-f0fdc72f47d0","Type":"ContainerDied","Data":"8deb249bc3b841a84ed7d1bd6703230aa3f896d62885b28411f728d3a8afe2fb"} Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.304871 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.305734 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.305802 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.306544 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:36:45 crc kubenswrapper[4869]: I0202 14:36:45.306623 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b" gracePeriod=600 Feb 02 14:36:46 crc kubenswrapper[4869]: I0202 14:36:46.034277 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b" exitCode=0 Feb 02 14:36:46 crc kubenswrapper[4869]: I0202 14:36:46.034333 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b"} Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.435042 4869 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2zsv9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.435549 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.466203 4869 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-znb54" Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.579028 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:48 crc kubenswrapper[4869]: I0202 14:36:48.579576 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:49 crc kubenswrapper[4869]: I0202 14:36:49.646769 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wkkx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 14:36:49 crc kubenswrapper[4869]: I0202 14:36:49.646881 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.020879 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.021815 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpswn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h9pgx_openshift-marketplace(35334030-48c7-4d7e-b202-75371c2c74f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.023421 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h9pgx" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.131261 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h9pgx" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.349673 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.349852 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cd4wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g6crm_openshift-marketplace(20990512-5147-4de8-95e0-f40e2156f395): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:52 crc kubenswrapper[4869]: E0202 14:36:52.351270 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g6crm" podUID="20990512-5147-4de8-95e0-f40e2156f395" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.784541 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g6crm" podUID="20990512-5147-4de8-95e0-f40e2156f395" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.835494 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.841746 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.878095 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879729 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff46a125-ff31-42f7-9a16-3eccdd7dd393" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff46a125-ff31-42f7-9a16-3eccdd7dd393" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879782 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879789 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879803 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879812 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879825 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" containerName="collect-profiles" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879834 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" containerName="collect-profiles" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.879844 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879853 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.879999 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" containerName="controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880013 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" containerName="route-controller-manager" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880025 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff46a125-ff31-42f7-9a16-3eccdd7dd393" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880041 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa6bddf-2294-4b66-816d-1bdaf3cd3c93" containerName="pruner" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880054 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" containerName="collect-profiles" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.880527 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.887284 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.919964 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920016 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") pod \"77160080-14bd-4f22-875d-ec53c922a9ca\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920035 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920092 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920119 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920140 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") pod \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\" (UID: \"aad51ba6-f20d-48b1-b456-c7309cc35bbd\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") pod \"77160080-14bd-4f22-875d-ec53c922a9ca\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920199 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") pod \"77160080-14bd-4f22-875d-ec53c922a9ca\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") pod \"77160080-14bd-4f22-875d-ec53c922a9ca\" (UID: \"77160080-14bd-4f22-875d-ec53c922a9ca\") " Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920428 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.920564 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.921606 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.921638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca" (OuterVolumeSpecName: "client-ca") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.922081 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config" (OuterVolumeSpecName: "config") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.922709 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config" (OuterVolumeSpecName: "config") pod "77160080-14bd-4f22-875d-ec53c922a9ca" (UID: "77160080-14bd-4f22-875d-ec53c922a9ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.923160 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "77160080-14bd-4f22-875d-ec53c922a9ca" (UID: "77160080-14bd-4f22-875d-ec53c922a9ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.945528 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.945880 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vlvm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-h4pkg_openshift-marketplace(442e63b3-7f70-4524-b229-aedfb054f395): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.947674 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-h4pkg" podUID="442e63b3-7f70-4524-b229-aedfb054f395" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.948761 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "77160080-14bd-4f22-875d-ec53c922a9ca" (UID: "77160080-14bd-4f22-875d-ec53c922a9ca"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.949089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch" (OuterVolumeSpecName: "kube-api-access-mpxch") pod "77160080-14bd-4f22-875d-ec53c922a9ca" (UID: "77160080-14bd-4f22-875d-ec53c922a9ca"). InnerVolumeSpecName "kube-api-access-mpxch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.949626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx" (OuterVolumeSpecName: "kube-api-access-s7sgx") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "kube-api-access-s7sgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: I0202 14:36:53.952892 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aad51ba6-f20d-48b1-b456-c7309cc35bbd" (UID: "aad51ba6-f20d-48b1-b456-c7309cc35bbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.959191 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.959396 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l744,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cm44g_openshift-marketplace(e56fa221-6e79-4c96-be0a-17db4803a127): ErrImagePull: rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:53 crc kubenswrapper[4869]: E0202 14:36:53.960491 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cm44g" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021497 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021641 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021672 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021724 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021738 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpxch\" (UniqueName: \"kubernetes.io/projected/77160080-14bd-4f22-875d-ec53c922a9ca-kube-api-access-mpxch\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021751 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021759 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7sgx\" (UniqueName: \"kubernetes.io/projected/aad51ba6-f20d-48b1-b456-c7309cc35bbd-kube-api-access-s7sgx\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021767 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aad51ba6-f20d-48b1-b456-c7309cc35bbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 
02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021775 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021783 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77160080-14bd-4f22-875d-ec53c922a9ca-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021793 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77160080-14bd-4f22-875d-ec53c922a9ca-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.021801 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aad51ba6-f20d-48b1-b456-c7309cc35bbd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.023028 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.023764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.024127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.026658 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44bcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wrnr2_openshift-marketplace(7bc37994-d436-4a72-93dd-610683ab871f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.027678 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.027895 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wrnr2" podUID="7bc37994-d436-4a72-93dd-610683ab871f" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.041745 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") pod \"route-controller-manager-c89fbc794-wrbkk\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.086985 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" event={"ID":"77160080-14bd-4f22-875d-ec53c922a9ca","Type":"ContainerDied","Data":"b3271718de5d10823c1d8cb58a92daa70441d4c0775319d6b1e4703935350e20"} Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.087046 4869 scope.go:117] "RemoveContainer" containerID="cd64c60574a3cf0a6a14251847ea949d24f3e42ff5033809e0f3a1441f80527d" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.087123 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.090323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" event={"ID":"aad51ba6-f20d-48b1-b456-c7309cc35bbd","Type":"ContainerDied","Data":"e0e031e07f3777bf084c57bd2ad11cca8d11083d95a8cbf49d91d2ce2ed3c4ce"} Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.090473 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2zsv9" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.093218 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wrnr2" podUID="7bc37994-d436-4a72-93dd-610683ab871f" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.093526 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-h4pkg" podUID="442e63b3-7f70-4524-b229-aedfb054f395" Feb 02 14:36:54 crc kubenswrapper[4869]: E0202 14:36:54.106152 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cm44g" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.124517 4869 scope.go:117] "RemoveContainer" containerID="35aa4cbc7f8390c939f51b4852ebf0a07cb58219c1cddd1dbaa0316bfe76b3f4" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.225459 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.228465 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.250998 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2zsv9"] Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.251433 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.255037 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wkkx2"] Feb 02 14:36:54 crc kubenswrapper[4869]: I0202 14:36:54.536125 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.099163 4869 generic.go:334] "Generic (PLEG): container finished" podID="2c21252d-a76f-437f-8611-f42993137df3" containerID="1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3" exitCode=0 Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.099294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerDied","Data":"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.106821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zqdwm" event={"ID":"f62540d0-1acd-4266-9738-f0fdc72f47d0","Type":"ContainerStarted","Data":"385702c722f118704ef90db2388dc715871a723316fb6a4763da039c9a02db57"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.106882 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.107147 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.107220 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.113056 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.114351 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" event={"ID":"d8c59892-6f39-4bd6-91ba-dc718a31d120","Type":"ContainerStarted","Data":"4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.114415 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" event={"ID":"d8c59892-6f39-4bd6-91ba-dc718a31d120","Type":"ContainerStarted","Data":"455b2abd7e5482aef3332c14262e762b84b5a7304c0eb824ce7c84e17fb72fbf"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.114888 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.116328 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerStarted","Data":"fe48020b66e56af4534dd9618f79104d475525a83e0e2a24ba2717bc0e29db19"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.118741 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerStarted","Data":"26b06ae64272a38d354c10e93d5b78b359d2c42ba63c10fa86dde8816377339c"} Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.171936 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" podStartSLOduration=3.171885137 podStartE2EDuration="3.171885137s" podCreationTimestamp="2026-02-02 14:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:55.168424869 +0000 UTC m=+216.813061639" watchObservedRunningTime="2026-02-02 14:36:55.171885137 +0000 UTC m=+216.816521917" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.278663 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.345592 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.346658 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.353241 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.353278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.361457 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.456185 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.456305 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.471580 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77160080-14bd-4f22-875d-ec53c922a9ca" path="/var/lib/kubelet/pods/77160080-14bd-4f22-875d-ec53c922a9ca/volumes" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.472314 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad51ba6-f20d-48b1-b456-c7309cc35bbd" path="/var/lib/kubelet/pods/aad51ba6-f20d-48b1-b456-c7309cc35bbd/volumes" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.557927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.558486 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.558075 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.584218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.663755 4869 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:55 crc kubenswrapper[4869]: I0202 14:36:55.894201 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 14:36:55 crc kubenswrapper[4869]: W0202 14:36:55.905104 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9a90fc62_12a8_426e_91bb_d995f9407e25.slice/crio-16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322 WatchSource:0}: Error finding container 16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322: Status 404 returned error can't find the container with id 16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322 Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.124882 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a90fc62-12a8-426e-91bb-d995f9407e25","Type":"ContainerStarted","Data":"16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322"} Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.128896 4869 generic.go:334] "Generic (PLEG): container finished" podID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerID="fe48020b66e56af4534dd9618f79104d475525a83e0e2a24ba2717bc0e29db19" exitCode=0 Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.128967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerDied","Data":"fe48020b66e56af4534dd9618f79104d475525a83e0e2a24ba2717bc0e29db19"} Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.131904 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerID="26b06ae64272a38d354c10e93d5b78b359d2c42ba63c10fa86dde8816377339c" exitCode=0 Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.132132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerDied","Data":"26b06ae64272a38d354c10e93d5b78b359d2c42ba63c10fa86dde8816377339c"} Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.134022 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.134105 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.663442 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"] Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.805828 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"] Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.812489 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.817059 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.817285 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.817480 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.817807 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.822540 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.822793 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.826103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.830818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"] Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879188 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.879354 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.980466 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.981570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.982805 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.983881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " 
pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.992626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:56 crc kubenswrapper[4869]: I0202 14:36:56.999686 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") pod \"controller-manager-585556997c-k595t\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:57 crc kubenswrapper[4869]: I0202 14:36:57.140661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a90fc62-12a8-426e-91bb-d995f9407e25","Type":"ContainerStarted","Data":"71dd72aca4bd7f90802f9d58c8a1b3bc8fc0b095c96486bbfbdac6d01e167b38"} Feb 02 14:36:57 crc kubenswrapper[4869]: I0202 14:36:57.176233 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:57 crc kubenswrapper[4869]: I0202 14:36:57.390841 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"] Feb 02 14:36:57 crc kubenswrapper[4869]: W0202 14:36:57.405618 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd15d0185_0712_4813_8818_f8ff704f3263.slice/crio-81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4 WatchSource:0}: Error finding container 81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4: Status 404 returned error can't find the container with id 81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4 Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.148341 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585556997c-k595t" event={"ID":"d15d0185-0712-4813-8818-f8ff704f3263","Type":"ContainerStarted","Data":"88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766"} Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.150411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.150525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585556997c-k595t" event={"ID":"d15d0185-0712-4813-8818-f8ff704f3263","Type":"ContainerStarted","Data":"81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4"} Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.152338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerStarted","Data":"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"} Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.154347 4869 generic.go:334] "Generic (PLEG): container finished" podID="9a90fc62-12a8-426e-91bb-d995f9407e25" 
containerID="71dd72aca4bd7f90802f9d58c8a1b3bc8fc0b095c96486bbfbdac6d01e167b38" exitCode=0 Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.154393 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a90fc62-12a8-426e-91bb-d995f9407e25","Type":"ContainerDied","Data":"71dd72aca4bd7f90802f9d58c8a1b3bc8fc0b095c96486bbfbdac6d01e167b38"} Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.165310 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.171068 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-585556997c-k595t" podStartSLOduration=6.171042602 podStartE2EDuration="6.171042602s" podCreationTimestamp="2026-02-02 14:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:36:58.16899121 +0000 UTC m=+219.813627980" watchObservedRunningTime="2026-02-02 14:36:58.171042602 +0000 UTC m=+219.815679372" Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.576613 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.576678 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-zqdwm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.576682 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:58 crc kubenswrapper[4869]: I0202 14:36:58.576743 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zqdwm" podUID="f62540d0-1acd-4266-9738-f0fdc72f47d0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.455458 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.474764 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9xjnr" podStartSLOduration=5.264679953 podStartE2EDuration="44.474743523s" podCreationTimestamp="2026-02-02 14:36:15 +0000 UTC" firstStartedPulling="2026-02-02 14:36:17.436155645 +0000 UTC m=+179.080792415" lastFinishedPulling="2026-02-02 14:36:56.646219215 +0000 UTC m=+218.290855985" observedRunningTime="2026-02-02 14:36:58.22827316 +0000 UTC m=+219.872909930" watchObservedRunningTime="2026-02-02 14:36:59.474743523 +0000 UTC m=+221.119380293" Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.529194 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") pod \"9a90fc62-12a8-426e-91bb-d995f9407e25\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.529282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") pod \"9a90fc62-12a8-426e-91bb-d995f9407e25\" (UID: \"9a90fc62-12a8-426e-91bb-d995f9407e25\") " Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.529418 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9a90fc62-12a8-426e-91bb-d995f9407e25" (UID: "9a90fc62-12a8-426e-91bb-d995f9407e25"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.529651 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a90fc62-12a8-426e-91bb-d995f9407e25-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.541117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9a90fc62-12a8-426e-91bb-d995f9407e25" (UID: "9a90fc62-12a8-426e-91bb-d995f9407e25"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:36:59 crc kubenswrapper[4869]: I0202 14:36:59.630841 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a90fc62-12a8-426e-91bb-d995f9407e25-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.169924 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerStarted","Data":"c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222"} Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.172019 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a90fc62-12a8-426e-91bb-d995f9407e25","Type":"ContainerDied","Data":"16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322"} Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.172087 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16e305d8e37d4625bf34005a11aa1f9e0396fd74f5afa0885ebc11e9ec5a8322" Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.172052 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 14:37:00 crc kubenswrapper[4869]: I0202 14:37:00.194456 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9kt6r" podStartSLOduration=3.769501522 podStartE2EDuration="42.194422912s" podCreationTimestamp="2026-02-02 14:36:18 +0000 UTC" firstStartedPulling="2026-02-02 14:36:20.663552379 +0000 UTC m=+182.308189149" lastFinishedPulling="2026-02-02 14:36:59.088473769 +0000 UTC m=+220.733110539" observedRunningTime="2026-02-02 14:37:00.193706614 +0000 UTC m=+221.838343404" watchObservedRunningTime="2026-02-02 14:37:00.194422912 +0000 UTC m=+221.839059682" Feb 02 14:37:01 crc kubenswrapper[4869]: I0202 14:37:01.179586 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerStarted","Data":"4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149"} Feb 02 14:37:01 crc kubenswrapper[4869]: I0202 14:37:01.202258 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k7wp9" podStartSLOduration=3.819537168 podStartE2EDuration="43.202230871s" podCreationTimestamp="2026-02-02 14:36:18 +0000 UTC" firstStartedPulling="2026-02-02 14:36:20.66765247 +0000 UTC m=+182.312289240" lastFinishedPulling="2026-02-02 14:37:00.050346153 +0000 UTC m=+221.694982943" observedRunningTime="2026-02-02 14:37:01.200752763 +0000 UTC m=+222.845389533" watchObservedRunningTime="2026-02-02 14:37:01.202230871 +0000 UTC m=+222.846867641" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.141748 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 14:37:02 crc kubenswrapper[4869]: E0202 14:37:02.142981 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a90fc62-12a8-426e-91bb-d995f9407e25" containerName="pruner" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.143000 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a90fc62-12a8-426e-91bb-d995f9407e25" containerName="pruner" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 
14:37:02.143130 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a90fc62-12a8-426e-91bb-d995f9407e25" containerName="pruner" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.143642 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.150372 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.150728 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.154128 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.170365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.170522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.170570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.272243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc 
kubenswrapper[4869]: I0202 14:37:02.272263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.298175 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") pod \"installer-9-crc\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.477444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 14:37:02 crc kubenswrapper[4869]: I0202 14:37:02.950566 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 14:37:03 crc kubenswrapper[4869]: I0202 14:37:03.191268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a","Type":"ContainerStarted","Data":"7adfeb67f0661759b89e7e0b4ac36ee5625d863782a8812d5fd336834d3294f2"} Feb 02 14:37:04 crc kubenswrapper[4869]: I0202 14:37:04.198298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a","Type":"ContainerStarted","Data":"4e29b74a75f39484800450916e4d1c5aab402b78c65dc22472418020d76f3456"} Feb 02 14:37:04 crc kubenswrapper[4869]: I0202 14:37:04.219322 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.219298555 podStartE2EDuration="2.219298555s" podCreationTimestamp="2026-02-02 14:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:04.216629378 +0000 UTC m=+225.861266158" watchObservedRunningTime="2026-02-02 14:37:04.219298555 +0000 UTC m=+225.863935315" Feb 02 14:37:05 crc kubenswrapper[4869]: I0202 14:37:05.917245 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:37:05 crc kubenswrapper[4869]: I0202 14:37:05.917686 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:37:06 crc kubenswrapper[4869]: I0202 14:37:06.086109 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:37:06 crc kubenswrapper[4869]: I0202 14:37:06.231725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerStarted","Data":"0b2c3ac4d08f82b7a5fad7e7219bf53013c9b65776a69054e3a436bb3b5edd60"} Feb 02 14:37:06 crc kubenswrapper[4869]: I0202 14:37:06.274023 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:37:07 crc kubenswrapper[4869]: I0202 14:37:07.239641 4869 generic.go:334] "Generic (PLEG): container finished" podID="35334030-48c7-4d7e-b202-75371c2c74f0" 
containerID="0b2c3ac4d08f82b7a5fad7e7219bf53013c9b65776a69054e3a436bb3b5edd60" exitCode=0 Feb 02 14:37:07 crc kubenswrapper[4869]: I0202 14:37:07.240232 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerDied","Data":"0b2c3ac4d08f82b7a5fad7e7219bf53013c9b65776a69054e3a436bb3b5edd60"} Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.118200 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"] Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.260342 4869 generic.go:334] "Generic (PLEG): container finished" podID="7bc37994-d436-4a72-93dd-610683ab871f" containerID="5adb81683a3033beec8093b130282168a76c6d84454acac94fe5c2d0d6d3406d" exitCode=0 Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.260431 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerDied","Data":"5adb81683a3033beec8093b130282168a76c6d84454acac94fe5c2d0d6d3406d"} Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.264693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerStarted","Data":"0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6"} Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.264861 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9xjnr" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="registry-server" containerID="cri-o://f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82" gracePeriod=2 Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.307068 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h9pgx" podStartSLOduration=2.7362304650000002 podStartE2EDuration="53.307040017s" podCreationTimestamp="2026-02-02 14:36:15 +0000 UTC" firstStartedPulling="2026-02-02 14:36:17.410221965 +0000 UTC m=+179.054858735" lastFinishedPulling="2026-02-02 14:37:07.981031517 +0000 UTC m=+229.625668287" observedRunningTime="2026-02-02 14:37:08.305419217 +0000 UTC m=+229.950055987" watchObservedRunningTime="2026-02-02 14:37:08.307040017 +0000 UTC m=+229.951676787" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.596766 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zqdwm" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.687012 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.687066 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.737346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.743548 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.801230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") pod \"2c21252d-a76f-437f-8611-f42993137df3\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.801685 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") pod \"2c21252d-a76f-437f-8611-f42993137df3\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.801769 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") pod \"2c21252d-a76f-437f-8611-f42993137df3\" (UID: \"2c21252d-a76f-437f-8611-f42993137df3\") " Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.802216 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities" (OuterVolumeSpecName: "utilities") pod "2c21252d-a76f-437f-8611-f42993137df3" (UID: "2c21252d-a76f-437f-8611-f42993137df3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.802452 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.808957 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p" (OuterVolumeSpecName: "kube-api-access-x9j9p") pod "2c21252d-a76f-437f-8611-f42993137df3" (UID: "2c21252d-a76f-437f-8611-f42993137df3"). InnerVolumeSpecName "kube-api-access-x9j9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.858353 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c21252d-a76f-437f-8611-f42993137df3" (UID: "2c21252d-a76f-437f-8611-f42993137df3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.904760 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9j9p\" (UniqueName: \"kubernetes.io/projected/2c21252d-a76f-437f-8611-f42993137df3-kube-api-access-x9j9p\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:08 crc kubenswrapper[4869]: I0202 14:37:08.904815 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c21252d-a76f-437f-8611-f42993137df3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.079236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.079374 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.123830 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272720 4869 generic.go:334] "Generic (PLEG): container finished" podID="2c21252d-a76f-437f-8611-f42993137df3" containerID="f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82" exitCode=0 Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272775 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerDied","Data":"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"} Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272851 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9xjnr" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272879 4869 scope.go:117] "RemoveContainer" containerID="f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.272858 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9xjnr" event={"ID":"2c21252d-a76f-437f-8611-f42993137df3","Type":"ContainerDied","Data":"ab3d419e69ab359ef2eb23e842d3d4f04eb05500497bb827ac7bf3115cbf4af4"} Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.294826 4869 scope.go:117] "RemoveContainer" containerID="1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.314978 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"] Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.318157 4869 scope.go:117] "RemoveContainer" containerID="f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.318212 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9xjnr"] Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.323180 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.339701 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.346056 4869 scope.go:117] "RemoveContainer" containerID="f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82" Feb 02 14:37:09 crc kubenswrapper[4869]: E0202 14:37:09.346832 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82\": container with ID starting with f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82 not found: ID does not exist" containerID="f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.346893 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82"} err="failed to get container status \"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82\": rpc error: code = NotFound desc = could not find container \"f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82\": container with ID starting with f9c7cbdff06342d01af21290177d22c4f193dd450beb1aef62406a574535eb82 not found: ID does not exist" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.346949 4869 scope.go:117] "RemoveContainer" containerID="1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3" Feb 02 14:37:09 crc kubenswrapper[4869]: E0202 14:37:09.347765 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3\": container with ID starting with 1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3 not found: ID does not exist" containerID="1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3" Feb 02 
14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.347883 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3"} err="failed to get container status \"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3\": rpc error: code = NotFound desc = could not find container \"1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3\": container with ID starting with 1e24c92e19790fefa2d094fa0fe407019c39d188fb93d0c800d6752bcff6f6a3 not found: ID does not exist" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.348041 4869 scope.go:117] "RemoveContainer" containerID="f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2" Feb 02 14:37:09 crc kubenswrapper[4869]: E0202 14:37:09.348702 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2\": container with ID starting with f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2 not found: ID does not exist" containerID="f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.348732 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2"} err="failed to get container status \"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2\": rpc error: code = NotFound desc = could not find container \"f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2\": container with ID starting with f83a59f0dcba757f7fb9b15c0e2ce27c962363e7211a4a6738719bfd280c83e2 not found: ID does not exist" Feb 02 14:37:09 crc kubenswrapper[4869]: I0202 14:37:09.471859 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c21252d-a76f-437f-8611-f42993137df3" path="/var/lib/kubelet/pods/2c21252d-a76f-437f-8611-f42993137df3/volumes" Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.657404 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"] Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.658316 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-585556997c-k595t" podUID="d15d0185-0712-4813-8818-f8ff704f3263" containerName="controller-manager" containerID="cri-o://88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766" gracePeriod=30 Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.693083 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.693457 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager" containerID="cri-o://4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035" gracePeriod=30 Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.723207 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"] Feb 02 14:37:12 crc kubenswrapper[4869]: I0202 14:37:12.724018 4869 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-operators-9kt6r" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="registry-server" containerID="cri-o://c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222" gracePeriod=2 Feb 02 14:37:13 crc kubenswrapper[4869]: I0202 14:37:13.299686 4869 generic.go:334] "Generic (PLEG): container finished" podID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerID="4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035" exitCode=0 Feb 02 14:37:13 crc kubenswrapper[4869]: I0202 14:37:13.299776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" event={"ID":"d8c59892-6f39-4bd6-91ba-dc718a31d120","Type":"ContainerDied","Data":"4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035"} Feb 02 14:37:13 crc kubenswrapper[4869]: I0202 14:37:13.303959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerStarted","Data":"1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712"} Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.230662 4869 patch_prober.go:28] interesting pod/route-controller-manager-c89fbc794-wrbkk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.230736 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.315484 4869 generic.go:334] "Generic (PLEG): container finished" podID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerID="c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222" exitCode=0 Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.315580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerDied","Data":"c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222"} Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.317826 4869 generic.go:334] "Generic (PLEG): container finished" podID="d15d0185-0712-4813-8818-f8ff704f3263" containerID="88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766" exitCode=0 Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.318765 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585556997c-k595t" event={"ID":"d15d0185-0712-4813-8818-f8ff704f3263","Type":"ContainerDied","Data":"88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766"} Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.346390 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wrnr2" podStartSLOduration=3.525181413 podStartE2EDuration="57.346363387s" podCreationTimestamp="2026-02-02 14:36:17 +0000 UTC" firstStartedPulling="2026-02-02 14:36:18.533267613 +0000 UTC m=+180.177904383" 
lastFinishedPulling="2026-02-02 14:37:12.354449587 +0000 UTC m=+233.999086357" observedRunningTime="2026-02-02 14:37:14.340428978 +0000 UTC m=+235.985065748" watchObservedRunningTime="2026-02-02 14:37:14.346363387 +0000 UTC m=+235.991000157" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.561618 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.599533 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:14 crc kubenswrapper[4869]: E0202 14:37:14.599826 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="extract-content" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.599847 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="extract-content" Feb 02 14:37:14 crc kubenswrapper[4869]: E0202 14:37:14.599862 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="extract-utilities" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.599871 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="extract-utilities" Feb 02 14:37:14 crc kubenswrapper[4869]: E0202 14:37:14.599890 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="registry-server" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.599901 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="registry-server" Feb 02 14:37:14 crc kubenswrapper[4869]: E0202 14:37:14.600270 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.600286 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.600426 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" containerName="route-controller-manager" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.600449 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c21252d-a76f-437f-8611-f42993137df3" containerName="registry-server" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.602140 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.619933 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") pod \"d8c59892-6f39-4bd6-91ba-dc718a31d120\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700421 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") pod \"d8c59892-6f39-4bd6-91ba-dc718a31d120\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") pod \"d8c59892-6f39-4bd6-91ba-dc718a31d120\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") pod \"d8c59892-6f39-4bd6-91ba-dc718a31d120\" (UID: \"d8c59892-6f39-4bd6-91ba-dc718a31d120\") " Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700781 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.700928 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc 
kubenswrapper[4869]: I0202 14:37:14.701436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca" (OuterVolumeSpecName: "client-ca") pod "d8c59892-6f39-4bd6-91ba-dc718a31d120" (UID: "d8c59892-6f39-4bd6-91ba-dc718a31d120"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.701793 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config" (OuterVolumeSpecName: "config") pod "d8c59892-6f39-4bd6-91ba-dc718a31d120" (UID: "d8c59892-6f39-4bd6-91ba-dc718a31d120"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.706885 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn" (OuterVolumeSpecName: "kube-api-access-6cwmn") pod "d8c59892-6f39-4bd6-91ba-dc718a31d120" (UID: "d8c59892-6f39-4bd6-91ba-dc718a31d120"). InnerVolumeSpecName "kube-api-access-6cwmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.714805 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d8c59892-6f39-4bd6-91ba-dc718a31d120" (UID: "d8c59892-6f39-4bd6-91ba-dc718a31d120"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803002 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803082 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803200 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803263 4869 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c59892-6f39-4bd6-91ba-dc718a31d120-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803276 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803286 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c59892-6f39-4bd6-91ba-dc718a31d120-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.803296 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cwmn\" (UniqueName: \"kubernetes.io/projected/d8c59892-6f39-4bd6-91ba-dc718a31d120-kube-api-access-6cwmn\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.804587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.804590 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.812014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.824404 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") pod \"route-controller-manager-6f57dfbcdd-xwdn9\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") " pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:14 crc kubenswrapper[4869]: I0202 14:37:14.934066 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.327376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" event={"ID":"d8c59892-6f39-4bd6-91ba-dc718a31d120","Type":"ContainerDied","Data":"455b2abd7e5482aef3332c14262e762b84b5a7304c0eb824ce7c84e17fb72fbf"} Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.327438 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk" Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.327470 4869 scope.go:117] "RemoveContainer" containerID="4cabe563b3766c405bab05565f596c0b021d19b96b70eaf89fa9091dbfe9b035" Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.369796 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.376215 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c89fbc794-wrbkk"] Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.470765 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c59892-6f39-4bd6-91ba-dc718a31d120" path="/var/lib/kubelet/pods/d8c59892-6f39-4bd6-91ba-dc718a31d120/volumes" Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.696732 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.696877 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.756229 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.884149 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:37:15 crc kubenswrapper[4869]: I0202 14:37:15.887531 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.021861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022040 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") pod \"02e119c7-dd08-471f-9800-5bda7b22a6d6\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022194 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022234 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") pod \"d15d0185-0712-4813-8818-f8ff704f3263\" (UID: \"d15d0185-0712-4813-8818-f8ff704f3263\") " Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") pod \"02e119c7-dd08-471f-9800-5bda7b22a6d6\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.022312 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") pod \"02e119c7-dd08-471f-9800-5bda7b22a6d6\" (UID: \"02e119c7-dd08-471f-9800-5bda7b22a6d6\") " Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.023011 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca" (OuterVolumeSpecName: "client-ca") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.024832 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities" (OuterVolumeSpecName: "utilities") pod "02e119c7-dd08-471f-9800-5bda7b22a6d6" (UID: "02e119c7-dd08-471f-9800-5bda7b22a6d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.025563 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config" (OuterVolumeSpecName: "config") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.026874 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.028704 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj" (OuterVolumeSpecName: "kube-api-access-ftcdj") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "kube-api-access-ftcdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.028885 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd" (OuterVolumeSpecName: "kube-api-access-cqjnd") pod "02e119c7-dd08-471f-9800-5bda7b22a6d6" (UID: "02e119c7-dd08-471f-9800-5bda7b22a6d6"). InnerVolumeSpecName "kube-api-access-cqjnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.031311 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d15d0185-0712-4813-8818-f8ff704f3263" (UID: "d15d0185-0712-4813-8818-f8ff704f3263"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123441 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123496 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123507 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftcdj\" (UniqueName: \"kubernetes.io/projected/d15d0185-0712-4813-8818-f8ff704f3263-kube-api-access-ftcdj\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123516 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d15d0185-0712-4813-8818-f8ff704f3263-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123527 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123539 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d15d0185-0712-4813-8818-f8ff704f3263-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.123551 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqjnd\" (UniqueName: \"kubernetes.io/projected/02e119c7-dd08-471f-9800-5bda7b22a6d6-kube-api-access-cqjnd\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.161579 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02e119c7-dd08-471f-9800-5bda7b22a6d6" (UID: "02e119c7-dd08-471f-9800-5bda7b22a6d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.225290 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02e119c7-dd08-471f-9800-5bda7b22a6d6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.350117 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9kt6r" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.350111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9kt6r" event={"ID":"02e119c7-dd08-471f-9800-5bda7b22a6d6","Type":"ContainerDied","Data":"9f2809f5a8c7e700679d9b9d7016f7f7d49674e7cd8851d66288e6ccd3443883"} Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.350311 4869 scope.go:117] "RemoveContainer" containerID="c8041b0b4c654aa6c0d50b8e5409c5fe56ff6919d9fe8362c50543eddfe2b222" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.353055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585556997c-k595t" event={"ID":"d15d0185-0712-4813-8818-f8ff704f3263","Type":"ContainerDied","Data":"81ac20da65a87768f6ac41976e49e8ddc1e292471e57623f390a2039b2d754e4"} Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.353061 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-585556997c-k595t" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.416793 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.420790 4869 scope.go:117] "RemoveContainer" containerID="fe48020b66e56af4534dd9618f79104d475525a83e0e2a24ba2717bc0e29db19" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.437810 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9kt6r"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.445894 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.448711 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-585556997c-k595t"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.452280 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.486549 4869 scope.go:117] "RemoveContainer" containerID="5761dc2d2fafda3cf6b457c2de25d204c006ac8d85953364b9966521a437f222" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.504727 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.522957 4869 scope.go:117] "RemoveContainer" containerID="88190df39618e1a823af0664590c288f3d8a7241d578d980580389ba24fab766" Feb 02 14:37:16 crc kubenswrapper[4869]: W0202 14:37:16.534561 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86bc8607_01df_4cb4_b6bb_cc2e9d5e9c21.slice/crio-cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7 WatchSource:0}: Error finding container cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7: Status 404 returned error can't find the container with id cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7 Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.816687 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:16 crc kubenswrapper[4869]: E0202 
14:37:16.817525 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="registry-server" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817545 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="registry-server" Feb 02 14:37:16 crc kubenswrapper[4869]: E0202 14:37:16.817561 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15d0185-0712-4813-8818-f8ff704f3263" containerName="controller-manager" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817569 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15d0185-0712-4813-8818-f8ff704f3263" containerName="controller-manager" Feb 02 14:37:16 crc kubenswrapper[4869]: E0202 14:37:16.817584 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="extract-content" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="extract-content" Feb 02 14:37:16 crc kubenswrapper[4869]: E0202 14:37:16.817611 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="extract-utilities" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817622 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="extract-utilities" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817737 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15d0185-0712-4813-8818-f8ff704f3263" containerName="controller-manager" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.817755 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" containerName="registry-server" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.818289 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.820752 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.822989 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.823112 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.823499 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.823651 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.865940 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.869611 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.870958 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968214 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968296 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:16 crc kubenswrapper[4869]: I0202 14:37:16.968397 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jc2h\" (UniqueName: 
\"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jc2h\" (UniqueName: \"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069196 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069256 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.069277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.070767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.070816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.071468 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 
02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.085977 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.093846 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jc2h\" (UniqueName: \"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") pod \"controller-manager-5f7449455-6lnbf\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.313292 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.365586 4869 generic.go:334] "Generic (PLEG): container finished" podID="e56fa221-6e79-4c96-be0a-17db4803a127" containerID="1d5262628061708d6b461198d2d084d86b80216bf8b77ec9e9e6c482080d5b5e" exitCode=0
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.365748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerDied","Data":"1d5262628061708d6b461198d2d084d86b80216bf8b77ec9e9e6c482080d5b5e"}
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.374349 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" event={"ID":"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21","Type":"ContainerStarted","Data":"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae"}
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.374408 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" event={"ID":"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21","Type":"ContainerStarted","Data":"cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7"}
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.375597 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.388979 4869 generic.go:334] "Generic (PLEG): container finished" podID="20990512-5147-4de8-95e0-f40e2156f395" containerID="7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126" exitCode=0
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.389057 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerDied","Data":"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126"}
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.401503 4869 generic.go:334] "Generic (PLEG): container finished" podID="442e63b3-7f70-4524-b229-aedfb054f395" containerID="435266a1fb45df9d425b2515a2f4a59487d90de763976fcfaaabab9e29fcb4cb" exitCode=0
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.401583 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerDied","Data":"435266a1fb45df9d425b2515a2f4a59487d90de763976fcfaaabab9e29fcb4cb"}
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.407788 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" podStartSLOduration=5.4077696060000005 podStartE2EDuration="5.407769606s" podCreationTimestamp="2026-02-02 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:17.405690053 +0000 UTC m=+239.050326823" watchObservedRunningTime="2026-02-02 14:37:17.407769606 +0000 UTC m=+239.052406376"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.470995 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e119c7-dd08-471f-9800-5bda7b22a6d6" path="/var/lib/kubelet/pods/02e119c7-dd08-471f-9800-5bda7b22a6d6/volumes"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.472286 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d15d0185-0712-4813-8818-f8ff704f3263" path="/var/lib/kubelet/pods/d15d0185-0712-4813-8818-f8ff704f3263/volumes"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.606039 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.661962 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.662034 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.716994 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:37:17 crc kubenswrapper[4869]: I0202 14:37:17.761249 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"]
Feb 02 14:37:17 crc kubenswrapper[4869]: W0202 14:37:17.772411 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b312c5_c580_4ea2_83d7_5217f24da91f.slice/crio-98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3 WatchSource:0}: Error finding container 98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3: Status 404 returned error can't find the container with id 98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.412012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerStarted","Data":"5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f"}
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.415789 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerStarted","Data":"797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b"}
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.417471 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" event={"ID":"f0b312c5-c580-4ea2-83d7-5217f24da91f","Type":"ContainerStarted","Data":"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d"}
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.417522 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" event={"ID":"f0b312c5-c580-4ea2-83d7-5217f24da91f","Type":"ContainerStarted","Data":"98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3"}
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.418922 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf"
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.426073 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf"
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.434830 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h4pkg" podStartSLOduration=2.973934785 podStartE2EDuration="1m1.434807257s" podCreationTimestamp="2026-02-02 14:36:17 +0000 UTC" firstStartedPulling="2026-02-02 14:36:19.624107056 +0000 UTC m=+181.268743826" lastFinishedPulling="2026-02-02 14:37:18.084979518 +0000 UTC m=+239.729616298" observedRunningTime="2026-02-02 14:37:18.430307884 +0000 UTC m=+240.074944654" watchObservedRunningTime="2026-02-02 14:37:18.434807257 +0000 UTC m=+240.079444027"
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.460807 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cm44g" podStartSLOduration=2.665096812 podStartE2EDuration="1m3.4607766s" podCreationTimestamp="2026-02-02 14:36:15 +0000 UTC" firstStartedPulling="2026-02-02 14:36:17.402089094 +0000 UTC m=+179.046725864" lastFinishedPulling="2026-02-02 14:37:18.197768882 +0000 UTC m=+239.842405652" observedRunningTime="2026-02-02 14:37:18.458283647 +0000 UTC m=+240.102920427" watchObservedRunningTime="2026-02-02 14:37:18.4607766 +0000 UTC m=+240.105413370"
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.472666 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wrnr2"
Feb 02 14:37:18 crc kubenswrapper[4869]: I0202 14:37:18.488301 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" podStartSLOduration=6.488277341 podStartE2EDuration="6.488277341s" podCreationTimestamp="2026-02-02 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:18.483591413 +0000 UTC m=+240.128228183" watchObservedRunningTime="2026-02-02 14:37:18.488277341 +0000 UTC m=+240.132914111"
Feb 02 14:37:20 crc kubenswrapper[4869]: I0202 14:37:20.430812 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerStarted","Data":"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"}
Feb 02 14:37:20 crc kubenswrapper[4869]: I0202 14:37:20.448975 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6crm" podStartSLOduration=3.782908403 podStartE2EDuration="1m5.448954017s" podCreationTimestamp="2026-02-02 14:36:15 +0000 UTC" firstStartedPulling="2026-02-02 14:36:17.424705252 +0000 UTC m=+179.069342022" lastFinishedPulling="2026-02-02 14:37:19.090750866 +0000 UTC m=+240.735387636" observedRunningTime="2026-02-02 14:37:20.447455259 +0000 UTC m=+242.092092029" watchObservedRunningTime="2026-02-02 14:37:20.448954017 +0000 UTC m=+242.093590787"
Feb 02 14:37:21 crc kubenswrapper[4869]: I0202 14:37:21.711043 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift" containerID="cri-o://4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4" gracePeriod=15
Feb 02 14:37:22 crc kubenswrapper[4869]: I0202 14:37:22.443834 4869 generic.go:334] "Generic (PLEG): container finished" podID="992c2b96-5783-4865-a47d-167caf91e241" containerID="4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4" exitCode=0
Feb 02 14:37:22 crc kubenswrapper[4869]: I0202 14:37:22.443894 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" event={"ID":"992c2b96-5783-4865-a47d-167caf91e241","Type":"ContainerDied","Data":"4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4"}
Feb 02 14:37:22 crc kubenswrapper[4869]: I0202 14:37:22.906717 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071697 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071795 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071816 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071836 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071872 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071923 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071953 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.071977 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072011 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072055 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072113 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072151 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.072171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") pod \"992c2b96-5783-4865-a47d-167caf91e241\" (UID: \"992c2b96-5783-4865-a47d-167caf91e241\") "
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.073744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.073947 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.076443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.076841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.079140 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.082355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.083125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.083396 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.087596 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6" (OuterVolumeSpecName: "kube-api-access-dfqt6") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "kube-api-access-dfqt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.088364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.089980 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.090441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.091355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.099351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "992c2b96-5783-4865-a47d-167caf91e241" (UID: "992c2b96-5783-4865-a47d-167caf91e241"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.174805 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175373 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175395 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175410 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175422 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175433 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/992c2b96-5783-4865-a47d-167caf91e241-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175447 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175461 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175472 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfqt6\" (UniqueName: \"kubernetes.io/projected/992c2b96-5783-4865-a47d-167caf91e241-kube-api-access-dfqt6\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175491 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175501 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175514 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175524 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.175535 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/992c2b96-5783-4865-a47d-167caf91e241-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.451127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm" event={"ID":"992c2b96-5783-4865-a47d-167caf91e241","Type":"ContainerDied","Data":"92bb1e4891d47a53670579957e39cb58cbf1f5539b31ad0a5ebf30fb24e6e365"}
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.451193 4869 scope.go:117] "RemoveContainer" containerID="4abb67cf09c57e6c6c99fe8a2c203707c7748b052b9ab7611a5c56ccd1921cd4"
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.451210 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-snmjm"
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.486620 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"]
Feb 02 14:37:23 crc kubenswrapper[4869]: I0202 14:37:23.494453 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-snmjm"]
Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.445889 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.446367 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.470777 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="992c2b96-5783-4865-a47d-167caf91e241" path="/var/lib/kubelet/pods/992c2b96-5783-4865-a47d-167caf91e241/volumes"
Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.493139 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:37:25 crc kubenswrapper[4869]: I0202 14:37:25.544699 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:37:26 crc kubenswrapper[4869]: I0202 14:37:26.092070 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:37:26 crc kubenswrapper[4869]: I0202 14:37:26.092144 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:37:26 crc kubenswrapper[4869]: I0202 14:37:26.140210 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:37:26 crc kubenswrapper[4869]: I0202 14:37:26.515665 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.077753 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.078164 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.117921 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.520636 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cm44g"]
Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.521278 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cm44g" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="registry-server" containerID="cri-o://797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b" gracePeriod=2
Feb 02 14:37:28 crc kubenswrapper[4869]: I0202 14:37:28.532038 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.118899 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"]
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.514546 4869 generic.go:334] "Generic (PLEG): container finished" podID="e56fa221-6e79-4c96-be0a-17db4803a127" containerID="797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b" exitCode=0
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.514688 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerDied","Data":"797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b"}
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.566275 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.683129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") pod \"e56fa221-6e79-4c96-be0a-17db4803a127\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") "
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.683277 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") pod \"e56fa221-6e79-4c96-be0a-17db4803a127\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") "
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.683339 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") pod \"e56fa221-6e79-4c96-be0a-17db4803a127\" (UID: \"e56fa221-6e79-4c96-be0a-17db4803a127\") "
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.687494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities" (OuterVolumeSpecName: "utilities") pod "e56fa221-6e79-4c96-be0a-17db4803a127" (UID: "e56fa221-6e79-4c96-be0a-17db4803a127"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.693420 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744" (OuterVolumeSpecName: "kube-api-access-9l744") pod "e56fa221-6e79-4c96-be0a-17db4803a127" (UID: "e56fa221-6e79-4c96-be0a-17db4803a127"). InnerVolumeSpecName "kube-api-access-9l744". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.741013 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e56fa221-6e79-4c96-be0a-17db4803a127" (UID: "e56fa221-6e79-4c96-be0a-17db4803a127"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.785121 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.785176 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l744\" (UniqueName: \"kubernetes.io/projected/e56fa221-6e79-4c96-be0a-17db4803a127-kube-api-access-9l744\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:29 crc kubenswrapper[4869]: I0202 14:37:29.785195 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56fa221-6e79-4c96-be0a-17db4803a127-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.526592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm44g" event={"ID":"e56fa221-6e79-4c96-be0a-17db4803a127","Type":"ContainerDied","Data":"3fdc2755e50c40ab06f7338836dcc4d68f5937d9bf9ebd941d8d98f6a64dcd17"}
Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.526660 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cm44g"
Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.526705 4869 scope.go:117] "RemoveContainer" containerID="797da2004ba3f119ddb37365965dd63249daf39b19e259a80345528795c4484b"
Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.526793 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h4pkg" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="registry-server" containerID="cri-o://5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f" gracePeriod=2
Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.557064 4869 scope.go:117] "RemoveContainer" containerID="1d5262628061708d6b461198d2d084d86b80216bf8b77ec9e9e6c482080d5b5e"
Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.572450 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cm44g"]
Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.579240 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cm44g"]
Feb 02 14:37:30 crc kubenswrapper[4869]: I0202 14:37:30.588986 4869 scope.go:117] "RemoveContainer" containerID="b2450dd93a7c78de896bbf627e97911c1993d1380dd59859505aa8d294fc3f44"
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.469948 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" path="/var/lib/kubelet/pods/e56fa221-6e79-4c96-be0a-17db4803a127/volumes"
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.537486 4869 generic.go:334] "Generic (PLEG): container finished" podID="442e63b3-7f70-4524-b229-aedfb054f395" containerID="5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f" exitCode=0
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.537558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerDied","Data":"5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f"}
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.620140 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") pod \"442e63b3-7f70-4524-b229-aedfb054f395\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") "
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811532 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") pod \"442e63b3-7f70-4524-b229-aedfb054f395\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") "
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811641 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") pod \"442e63b3-7f70-4524-b229-aedfb054f395\" (UID: \"442e63b3-7f70-4524-b229-aedfb054f395\") "
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811866 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities" (OuterVolumeSpecName: "utilities") pod "442e63b3-7f70-4524-b229-aedfb054f395" (UID: "442e63b3-7f70-4524-b229-aedfb054f395"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.811999 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.817385 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5" (OuterVolumeSpecName: "kube-api-access-vlvm5") pod "442e63b3-7f70-4524-b229-aedfb054f395" (UID: "442e63b3-7f70-4524-b229-aedfb054f395"). InnerVolumeSpecName "kube-api-access-vlvm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.837819 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "442e63b3-7f70-4524-b229-aedfb054f395" (UID: "442e63b3-7f70-4524-b229-aedfb054f395"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.913849 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442e63b3-7f70-4524-b229-aedfb054f395-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:31 crc kubenswrapper[4869]: I0202 14:37:31.913897 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlvm5\" (UniqueName: \"kubernetes.io/projected/442e63b3-7f70-4524-b229-aedfb054f395-kube-api-access-vlvm5\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.546647 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4pkg" event={"ID":"442e63b3-7f70-4524-b229-aedfb054f395","Type":"ContainerDied","Data":"1a0c74611f17f263977a1b27acf9874f05439e600bd46e6c1d9bd58db5ca5ce2"}
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.546689 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4pkg"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.546717 4869 scope.go:117] "RemoveContainer" containerID="5947ac8f14c73d2187928be98d6353455f8352629e18fc580531aab5e660d42f"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.578363 4869 scope.go:117] "RemoveContainer" containerID="435266a1fb45df9d425b2515a2f4a59487d90de763976fcfaaabab9e29fcb4cb"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.595756 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"]
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.598846 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4pkg"]
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.615433 4869 scope.go:117] "RemoveContainer" containerID="9fde05ff8b3ab7b33bf7fd64de1786d6d6c5b221f2074b9b8d881ce96c0861b1"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.668207 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"]
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.668507 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerName="controller-manager" containerID="cri-o://b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" gracePeriod=30
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.765658 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"]
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.766013 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerName="route-controller-manager" containerID="cri-o://67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" gracePeriod=30
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827586 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-69btm"]
Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827870 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="registry-server"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827884 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="registry-server"
Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827895 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="registry-server"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827901 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="registry-server"
Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827932 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="extract-content"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827938 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="extract-content"
Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827952 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="extract-utilities"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827958 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="extract-utilities"
Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827966 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="extract-content"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827972 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="extract-content"
Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827981 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="extract-utilities"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.827987 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="extract-utilities"
Feb 02 14:37:32 crc kubenswrapper[4869]: E0202 14:37:32.827997 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828003 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828092 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="442e63b3-7f70-4524-b229-aedfb054f395" containerName="registry-server"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828105 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="992c2b96-5783-4865-a47d-167caf91e241" containerName="oauth-openshift"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828121 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56fa221-6e79-4c96-be0a-17db4803a127" containerName="registry-server"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.828592 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.833535 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.833621 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836084 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836143 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836300 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836351 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836475 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836533 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836585 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836633 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.836819 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.837155 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.843264 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.847073 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-69btm"]
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.849853 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.859353 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f717d6c0-e841-450a-90b8-e651ed89f315-audit-dir\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929235 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929324 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929367 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929441 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929621 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929652 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-audit-policies\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gjzm\" (UniqueName: \"kubernetes.io/projected/f717d6c0-e841-450a-90b8-e651ed89f315-kube-api-access-9gjzm\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:32 crc kubenswrapper[4869]: I0202 14:37:32.929870 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.030715 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031294 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031368 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-audit-policies\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031410 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gjzm\" (UniqueName: \"kubernetes.io/projected/f717d6c0-e841-450a-90b8-e651ed89f315-kube-api-access-9gjzm\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031442 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031472 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f717d6c0-e841-450a-90b8-e651ed89f315-audit-dir\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031495 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031531 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.031555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.032897 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-audit-policies\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.034207 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.034823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.035049 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f717d6c0-e841-450a-90b8-e651ed89f315-audit-dir\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.037660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.038944 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-error\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.039038 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.040805 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.040948 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-login\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.041385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.042094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-session\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.043853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.054066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f717d6c0-e841-450a-90b8-e651ed89f315-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.057668 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gjzm\" (UniqueName: \"kubernetes.io/projected/f717d6c0-e841-450a-90b8-e651ed89f315-kube-api-access-9gjzm\") pod \"oauth-openshift-6b5f774455-69btm\" (UID: \"f717d6c0-e841-450a-90b8-e651ed89f315\") " pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.208145 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.215290 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.300249 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf"
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.334243 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") pod \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") "
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.334377 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") pod \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") "
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.334435 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") pod \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") "
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.334481 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") pod \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\" (UID: \"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21\") "
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.335441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca" (OuterVolumeSpecName: "client-ca") pod "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" (UID: "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.335767 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config" (OuterVolumeSpecName: "config") pod "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" (UID: "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.338208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" (UID: "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.338248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679" (OuterVolumeSpecName: "kube-api-access-95679") pod "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" (UID: "86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21"). InnerVolumeSpecName "kube-api-access-95679". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.435862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436032 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436137 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jc2h\" (UniqueName: \"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436172 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436208 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") pod \"f0b312c5-c580-4ea2-83d7-5217f24da91f\" (UID: \"f0b312c5-c580-4ea2-83d7-5217f24da91f\") " Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436592 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95679\" (UniqueName: \"kubernetes.io/projected/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-kube-api-access-95679\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436613 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.436626 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 
14:37:33.436638 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.437621 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.437646 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca" (OuterVolumeSpecName: "client-ca") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.437864 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config" (OuterVolumeSpecName: "config") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.441046 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.441095 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h" (OuterVolumeSpecName: "kube-api-access-7jc2h") pod "f0b312c5-c580-4ea2-83d7-5217f24da91f" (UID: "f0b312c5-c580-4ea2-83d7-5217f24da91f"). InnerVolumeSpecName "kube-api-access-7jc2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.477703 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442e63b3-7f70-4524-b229-aedfb054f395" path="/var/lib/kubelet/pods/442e63b3-7f70-4524-b229-aedfb054f395/volumes" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538856 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jc2h\" (UniqueName: \"kubernetes.io/projected/f0b312c5-c580-4ea2-83d7-5217f24da91f-kube-api-access-7jc2h\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538948 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538967 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538978 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0b312c5-c580-4ea2-83d7-5217f24da91f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.538991 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b312c5-c580-4ea2-83d7-5217f24da91f-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562719 4869 generic.go:334] "Generic (PLEG): container finished" podID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerID="67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" exitCode=0 Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562790 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" event={"ID":"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21","Type":"ContainerDied","Data":"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae"} Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" event={"ID":"86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21","Type":"ContainerDied","Data":"cf236560b7d6646a54e7f59311d83effbae6d4d5360820e79eb0df220d0e6ee7"} Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562840 4869 scope.go:117] "RemoveContainer" containerID="67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.562959 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.566819 4869 generic.go:334] "Generic (PLEG): container finished" podID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerID="b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" exitCode=0 Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.566894 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.566889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" event={"ID":"f0b312c5-c580-4ea2-83d7-5217f24da91f","Type":"ContainerDied","Data":"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d"} Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.567224 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7449455-6lnbf" event={"ID":"f0b312c5-c580-4ea2-83d7-5217f24da91f","Type":"ContainerDied","Data":"98adde0a702d52c9a0d22e054e46c1d5239c4279f9c7333137df738bae8d3aa3"} Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.584075 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.596555 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f57dfbcdd-xwdn9"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.602888 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.606943 4869 scope.go:117] "RemoveContainer" containerID="67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" Feb 02 14:37:33 crc kubenswrapper[4869]: E0202 14:37:33.607582 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae\": container with ID starting with 67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae not found: ID does not exist" containerID="67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.607656 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae"} err="failed to get container status \"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae\": rpc error: code = NotFound desc = could not find container \"67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae\": container with ID starting with 67871ade9db2b44905b83a1b580db9eedc751372954080f548dfd95e0ea3aaae not found: ID does not exist" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.607689 4869 scope.go:117] "RemoveContainer" containerID="b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.608321 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f7449455-6lnbf"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.639234 4869 scope.go:117] "RemoveContainer" containerID="b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" Feb 02 14:37:33 crc kubenswrapper[4869]: E0202 14:37:33.640019 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d\": container with ID starting with b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d not found: ID does not exist" 
containerID="b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.640055 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d"} err="failed to get container status \"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d\": rpc error: code = NotFound desc = could not find container \"b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d\": container with ID starting with b0d5a84fe34934d3c68d0faef4b5b4fad2221940400c49a0dbe112ac968e488d not found: ID does not exist" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.671809 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b5f774455-69btm"] Feb 02 14:37:33 crc kubenswrapper[4869]: W0202 14:37:33.676413 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf717d6c0_e841_450a_90b8_e651ed89f315.slice/crio-4f9a06206efe9ff0a29dfaec184457a51184170a2123e2d63f42b1b62bbd36c4 WatchSource:0}: Error finding container 4f9a06206efe9ff0a29dfaec184457a51184170a2123e2d63f42b1b62bbd36c4: Status 404 returned error can't find the container with id 4f9a06206efe9ff0a29dfaec184457a51184170a2123e2d63f42b1b62bbd36c4 Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.830040 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv"] Feb 02 14:37:33 crc kubenswrapper[4869]: E0202 14:37:33.830474 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerName="route-controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.830521 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerName="route-controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: E0202 14:37:33.830541 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerName="controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.830549 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerName="controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.830718 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" containerName="controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.831425 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" containerName="route-controller-manager" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.836442 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.836444 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.838103 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.838896 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.838923 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.841155 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.841519 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.841584 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.841859 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.842055 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.842202 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.842276 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.842293 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.844008 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.845681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b7910bb-92fa-4254-9635-b376bd2e3b5b-serving-cert\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.845729 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zv56\" (UniqueName: \"kubernetes.io/projected/9b7910bb-92fa-4254-9635-b376bd2e3b5b-kube-api-access-7zv56\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.848845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-config\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.849025 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-proxy-ca-bundles\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.849147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6265c823-67e0-40d0-9a85-d57db97e2513-serving-cert\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.849418 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.850645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xrb\" (UniqueName: \"kubernetes.io/projected/6265c823-67e0-40d0-9a85-d57db97e2513-kube-api-access-t2xrb\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.850728 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-client-ca\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.850761 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-config\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.850844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-client-ca\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.867040 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.889212 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.893871 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv"] Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.953454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-client-ca\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.953837 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b7910bb-92fa-4254-9635-b376bd2e3b5b-serving-cert\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.953971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zv56\" (UniqueName: \"kubernetes.io/projected/9b7910bb-92fa-4254-9635-b376bd2e3b5b-kube-api-access-7zv56\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-config\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954266 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-proxy-ca-bundles\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6265c823-67e0-40d0-9a85-d57db97e2513-serving-cert\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954569 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xrb\" (UniqueName: \"kubernetes.io/projected/6265c823-67e0-40d0-9a85-d57db97e2513-kube-api-access-t2xrb\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-client-ca\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954797 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-config\") pod 
\"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.954936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-client-ca\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.955817 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-config\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.956155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6265c823-67e0-40d0-9a85-d57db97e2513-client-ca\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.956258 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-proxy-ca-bundles\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.956996 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b7910bb-92fa-4254-9635-b376bd2e3b5b-config\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.966213 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b7910bb-92fa-4254-9635-b376bd2e3b5b-serving-cert\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.973710 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6265c823-67e0-40d0-9a85-d57db97e2513-serving-cert\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 14:37:33.978747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xrb\" (UniqueName: \"kubernetes.io/projected/6265c823-67e0-40d0-9a85-d57db97e2513-kube-api-access-t2xrb\") pod \"route-controller-manager-75d8bc457c-vh8fn\" (UID: \"6265c823-67e0-40d0-9a85-d57db97e2513\") " pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:33 crc kubenswrapper[4869]: I0202 
14:37:33.979930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zv56\" (UniqueName: \"kubernetes.io/projected/9b7910bb-92fa-4254-9635-b376bd2e3b5b-kube-api-access-7zv56\") pod \"controller-manager-5db6dd47c5-gnrlv\" (UID: \"9b7910bb-92fa-4254-9635-b376bd2e3b5b\") " pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.171171 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.196831 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:34 crc kubenswrapper[4869]: W0202 14:37:34.505315 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6265c823_67e0_40d0_9a85_d57db97e2513.slice/crio-ea29608157630d501a847268504a1861fb0a895ca48f563074d8d69cd77382c2 WatchSource:0}: Error finding container ea29608157630d501a847268504a1861fb0a895ca48f563074d8d69cd77382c2: Status 404 returned error can't find the container with id ea29608157630d501a847268504a1861fb0a895ca48f563074d8d69cd77382c2 Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.506217 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn"] Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.580684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" event={"ID":"f717d6c0-e841-450a-90b8-e651ed89f315","Type":"ContainerStarted","Data":"004aa9e20d90c52c532959af386df200cddc9e51d9026630027395f5501fbe58"} Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.580735 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" event={"ID":"f717d6c0-e841-450a-90b8-e651ed89f315","Type":"ContainerStarted","Data":"4f9a06206efe9ff0a29dfaec184457a51184170a2123e2d63f42b1b62bbd36c4"} Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.582160 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.587735 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" event={"ID":"6265c823-67e0-40d0-9a85-d57db97e2513","Type":"ContainerStarted","Data":"ea29608157630d501a847268504a1861fb0a895ca48f563074d8d69cd77382c2"} Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.587870 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.643778 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b5f774455-69btm" podStartSLOduration=38.643754039 podStartE2EDuration="38.643754039s" podCreationTimestamp="2026-02-02 14:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:34.618340301 +0000 UTC m=+256.262977071" watchObservedRunningTime="2026-02-02 14:37:34.643754039 +0000 UTC 
m=+256.288390809" Feb 02 14:37:34 crc kubenswrapper[4869]: I0202 14:37:34.654649 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv"] Feb 02 14:37:34 crc kubenswrapper[4869]: W0202 14:37:34.665336 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7910bb_92fa_4254_9635_b376bd2e3b5b.slice/crio-ffdd039c32f65f941efa3b8430c2f46543aaf858ca17099ec14e50cce6e7679b WatchSource:0}: Error finding container ffdd039c32f65f941efa3b8430c2f46543aaf858ca17099ec14e50cce6e7679b: Status 404 returned error can't find the container with id ffdd039c32f65f941efa3b8430c2f46543aaf858ca17099ec14e50cce6e7679b Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.473464 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21" path="/var/lib/kubelet/pods/86bc8607-01df-4cb4-b6bb-cc2e9d5e9c21/volumes" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.474843 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b312c5-c580-4ea2-83d7-5217f24da91f" path="/var/lib/kubelet/pods/f0b312c5-c580-4ea2-83d7-5217f24da91f/volumes" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.594850 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" event={"ID":"6265c823-67e0-40d0-9a85-d57db97e2513","Type":"ContainerStarted","Data":"256411f04db530b62c380608d97946b9b623805f96c4af44692a56c21b7ceb7d"} Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.596730 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.598789 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" event={"ID":"9b7910bb-92fa-4254-9635-b376bd2e3b5b","Type":"ContainerStarted","Data":"904a9654994a6deea97a335762a1e162586410d8a11a6bee3309d47260b5ad34"} Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.598821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" event={"ID":"9b7910bb-92fa-4254-9635-b376bd2e3b5b","Type":"ContainerStarted","Data":"ffdd039c32f65f941efa3b8430c2f46543aaf858ca17099ec14e50cce6e7679b"} Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.599192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.604473 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.605299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.643228 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75d8bc457c-vh8fn" podStartSLOduration=3.643205648 podStartE2EDuration="3.643205648s" podCreationTimestamp="2026-02-02 14:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:35.619783439 +0000 UTC m=+257.264420219" watchObservedRunningTime="2026-02-02 14:37:35.643205648 +0000 UTC m=+257.287842418" Feb 02 14:37:35 crc kubenswrapper[4869]: I0202 14:37:35.644729 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5db6dd47c5-gnrlv" podStartSLOduration=3.644723656 podStartE2EDuration="3.644723656s" podCreationTimestamp="2026-02-02 14:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:37:35.64252833 +0000 UTC m=+257.287165100" watchObservedRunningTime="2026-02-02 14:37:35.644723656 +0000 UTC m=+257.289360426" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.190162 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.191692 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193002 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193314 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193361 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193444 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193518 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.193585 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5" gracePeriod=15 Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.194994 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195460 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195480 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195492 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195509 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195516 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195525 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195532 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195548 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195555 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195567 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195574 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.195587 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.195594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197040 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197066 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197079 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197087 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197100 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197108 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197120 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 14:37:41 crc kubenswrapper[4869]: E0202 14:37:41.197881 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.197971 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.242308 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288504 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288537 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288565 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288608 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288633 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.288655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390636 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390841 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.390929 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391104 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391156 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391212 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391261 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.391508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:41 crc kubenswrapper[4869]: I0202 14:37:41.543399 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 14:37:42 crc kubenswrapper[4869]: W0202 14:37:42.292399 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e55d596794aaa8f19a4f5d9b34185a347aecefa0e6807396866bea39d6f03efb WatchSource:0}: Error finding container e55d596794aaa8f19a4f5d9b34185a347aecefa0e6807396866bea39d6f03efb: Status 404 returned error can't find the container with id e55d596794aaa8f19a4f5d9b34185a347aecefa0e6807396866bea39d6f03efb
Feb 02 14:37:42 crc kubenswrapper[4869]: E0202 14:37:42.296533 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.82:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189074c97d476a90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 14:37:42.295685776 +0000 UTC m=+263.940322546,LastTimestamp:2026-02-02 14:37:42.295685776 +0000 UTC m=+263.940322546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.644143 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.646468 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647158 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5" exitCode=0
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647188 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213" exitCode=0
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647195 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f" exitCode=0
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647203 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649" exitCode=2
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.647266 4869 scope.go:117] "RemoveContainer" containerID="18ac055a161ed9bb11563707066aed512a5e3535f805b7d06704ae81cc73664e"
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.649455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2"}
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.649495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e55d596794aaa8f19a4f5d9b34185a347aecefa0e6807396866bea39d6f03efb"}
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.651121 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.652272 4869 generic.go:334] "Generic (PLEG): container finished" podID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" containerID="4e29b74a75f39484800450916e4d1c5aab402b78c65dc22472418020d76f3456" exitCode=0
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.652314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a","Type":"ContainerDied","Data":"4e29b74a75f39484800450916e4d1c5aab402b78c65dc22472418020d76f3456"}
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.653298 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:42 crc kubenswrapper[4869]: I0202 14:37:42.653545 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:43 crc kubenswrapper[4869]: I0202 14:37:43.663404 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.045709 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.046761 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.047168 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.135940 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") pod \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") "
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.135990 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") pod \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") "
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") pod \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\" (UID: \"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a\") "
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136069 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" (UID: "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock" (OuterVolumeSpecName: "var-lock") pod "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" (UID: "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136505 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.136520 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.145964 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" (UID: "9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.237867 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.585161 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.586879 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.587590 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.588120 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.588425 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.642882 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643045 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643059 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643136 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643170 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643278 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643502 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643519 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.643528 4869 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.676130 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.677146 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5" exitCode=0
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.677223 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.677232 4869 scope.go:117] "RemoveContainer" containerID="1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.679106 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a","Type":"ContainerDied","Data":"7adfeb67f0661759b89e7e0b4ac36ee5625d863782a8812d5fd336834d3294f2"}
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.679139 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adfeb67f0661759b89e7e0b4ac36ee5625d863782a8812d5fd336834d3294f2"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.679170 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.699550 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.700164 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.700513 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.700842 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.701054 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.701210 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.723131 4869 scope.go:117] "RemoveContainer" containerID="bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.743617 4869 scope.go:117] "RemoveContainer" containerID="f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.764130 4869 scope.go:117] "RemoveContainer" containerID="096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.789283 4869 scope.go:117] "RemoveContainer" containerID="6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.806699 4869 scope.go:117] "RemoveContainer" containerID="1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.826310 4869 scope.go:117] "RemoveContainer" containerID="1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.826846 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\": container with ID starting with 1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5 not found: ID does not exist" containerID="1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.826899 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5"} err="failed to get container status \"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\": rpc error: code = NotFound desc = could not find container \"1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5\": container with ID starting with 1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5 not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.826978 4869 scope.go:117] "RemoveContainer" containerID="bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.828719 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\": container with ID starting with bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213 not found: ID does not exist" containerID="bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.828756 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213"} err="failed to get container status \"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\": rpc error: code = NotFound desc = could not find container \"bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213\": container with ID starting with bbbe60010f51b3055160a3abeb5cf9a752f05b8d8ef4017cfae304f71adac213 not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.828789 4869 scope.go:117] "RemoveContainer" containerID="f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.829182 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\": container with ID starting with f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f not found: ID does not exist" containerID="f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.829208 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f"} err="failed to get container status \"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\": rpc error: code = NotFound desc = could not find container \"f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f\": container with ID starting with f15f563b7efa3fb52efcf4c02c0ba06a356a7ff0fecb82e4803fcf639b9c352f not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.829225 4869 scope.go:117] "RemoveContainer" containerID="096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.829508 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\": container with ID starting with 096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649 not found: ID does not exist" containerID="096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.829526 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649"} err="failed to get container status \"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\": rpc error: code = NotFound desc = could not find container \"096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649\": container with ID starting with 096fb60316dea91f5a1f2f9bb83c245d6c33e7af96734d04ec001e016c201649 not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.829540 4869 scope.go:117] "RemoveContainer" containerID="6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.830013 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\": container with ID starting with 6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5 not found: ID does not exist" containerID="6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.830044 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5"} err="failed to get container status \"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\": rpc error: code = NotFound desc = could not find container \"6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5\": container with ID starting with 6ee7e924494925acb6a1a530f5d68f076ecaf17c25b984f805727c89e7c6cba5 not found: ID does not exist"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.830061 4869 scope.go:117] "RemoveContainer" containerID="1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"
Feb 02 14:37:44 crc kubenswrapper[4869]: E0202 14:37:44.830358 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\": container with ID starting with 1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37 not found: ID does not exist" containerID="1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"
Feb 02 14:37:44 crc kubenswrapper[4869]: I0202 14:37:44.830379 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37"} err="failed to get container status \"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\": rpc error: code = NotFound desc = could not find container \"1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37\": container with ID starting with 1ad52fbf4a9be38c3c9e5f18e5af2fc7bc2f404c010d1176f5ddfae2319dde37 not found: ID does not exist"
Feb 02 14:37:45 crc kubenswrapper[4869]: I0202 14:37:45.468936 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.491635 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.492191 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.492800 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.493224 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.493612 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:47 crc kubenswrapper[4869]: I0202 14:37:47.493649 4869 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.493893 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="200ms"
Feb 02 14:37:47 crc kubenswrapper[4869]: E0202 14:37:47.695404 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="400ms"
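Each RemoveContainer above is answered by a NotFound from the runtime, and the kubelet logs it and moves on: the containers were already removed, so a missing container satisfies the desired end state and the cleanup stays idempotent. A small Go sketch of that NotFound-as-success pattern; errNotFound and the function names are invented stand-ins for the CRI calls seen in the log, not a real client:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for "rpc error: code = NotFound" from the runtime.
    var errNotFound = errors.New("container not found")

    // containerStatus simulates querying a container that is already gone.
    func containerStatus(id string) error { return errNotFound }

    // removeContainer treats NotFound as success: the goal state
    // ("container gone") already holds, so there is nothing left to do.
    func removeContainer(id string) error {
        if err := containerStatus(id); err != nil {
            if errors.Is(err, errNotFound) {
                fmt.Printf("DeleteContainer: %q already gone, nothing to do\n", id)
                return nil
            }
            return err
        }
        fmt.Printf("removing container %q\n", id)
        return nil
    }

    func main() {
        _ = removeContainer("1468d7f6095941e17e9758ef93134d5e341a9d84d3a72c6aad49130d02bb29d5")
    }

This is why the "DeleteContainer returned error" entries above are logged at info level (I) even though the underlying status lookup (E) failed: the error is expected on a repeat delete.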
Feb 02 14:37:48 crc kubenswrapper[4869]: E0202 14:37:48.096443 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="800ms"
Feb 02 14:37:48 crc kubenswrapper[4869]: E0202 14:37:48.263067 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.82:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189074c97d476a90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 14:37:42.295685776 +0000 UTC m=+263.940322546,LastTimestamp:2026-02-02 14:37:42.295685776 +0000 UTC m=+263.940322546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 14:37:48 crc kubenswrapper[4869]: E0202 14:37:48.897986 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="1.6s"
Feb 02 14:37:49 crc kubenswrapper[4869]: I0202 14:37:49.465454 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:49 crc kubenswrapper[4869]: I0202 14:37:49.466102 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:50 crc kubenswrapper[4869]: E0202 14:37:50.499452 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="3.2s"
Feb 02 14:37:52 crc kubenswrapper[4869]: I0202 14:37:52.558412 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:37:52 crc kubenswrapper[4869]: I0202 14:37:52.558996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:37:52 crc kubenswrapper[4869]: I0202 14:37:52.559072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:37:52 crc kubenswrapper[4869]: I0202 14:37:52.559202 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:37:52 crc kubenswrapper[4869]: W0202 14:37:52.559687 4869 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:52 crc kubenswrapper[4869]: E0202 14:37:52.559785 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:52 crc kubenswrapper[4869]: W0202 14:37:52.559986 4869 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27217": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:52 crc kubenswrapper[4869]: E0202 14:37:52.560129 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27217\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:52 crc kubenswrapper[4869]: W0202 14:37:52.559687 4869 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:52 crc kubenswrapper[4869]: E0202 14:37:52.560231 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559776 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559853 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559863 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559998 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:39:55.559964755 +0000 UTC m=+397.204601525 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.560197 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 14:39:55.56017489 +0000 UTC m=+397.204811660 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.559996 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:53 crc kubenswrapper[4869]: W0202 14:37:53.561050 4869 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.561139 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:53 crc kubenswrapper[4869]: E0202 14:37:53.700238 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.82:6443: connect: connection refused" interval="6.4s"
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561138 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561275 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561186 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561395 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561396 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 14:39:56.56136957 +0000 UTC m=+398.206006340 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition
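With the apiserver refusing connections, the retry machinery above backs off exponentially: the node-lease interval doubles from 200ms to 6.4s across these entries, and failed volume operations are rescheduled with a durationBeforeRetry of 2m2s. The sketch below reproduces both schedules; the 2m2s value is read directly from the log and treated here as an assumed ceiling, since the actual cap is not visible in this output:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Lease path: the retry interval doubles on each failure, giving
        // the 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s -> 6.4s sequence.
        interval := 200 * time.Millisecond
        for i := 0; i < 6; i++ {
            fmt.Printf("Failed to ensure lease exists, will retry interval=%v\n", interval)
            interval *= 2
        }

        // Volume path: doubling with a ceiling, so repeated failures all
        // land on the same durationBeforeRetry once the cap is reached.
        maxDelay := 2*time.Minute + 2*time.Second // assumed cap, from the log
        delay := 500 * time.Millisecond
        for delay < maxDelay {
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
        fmt.Printf("durationBeforeRetry=%v\n", delay)
    }

Capping the growth keeps the kubelet probing often enough to recover promptly once the apiserver returns, without hammering it while it is down.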
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.561517 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 14:39:56.561486642 +0000 UTC m=+398.206123412 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition
Feb 02 14:37:54 crc kubenswrapper[4869]: W0202 14:37:54.619435 4869 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.619555 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.779785 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.779847 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53" exitCode=1
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.779886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53"}
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.781350 4869 scope.go:117] "RemoveContainer" containerID="24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.781974 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.782508 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:54 crc kubenswrapper[4869]: I0202 14:37:54.782748 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:54 crc kubenswrapper[4869]: W0202 14:37:54.976256 4869 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27217": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:54 crc kubenswrapper[4869]: E0202 14:37:54.976704 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27217\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:55 crc kubenswrapper[4869]: W0202 14:37:55.241231 4869 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:55 crc kubenswrapper[4869]: E0202 14:37:55.241345 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.794042 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.794148 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48"}
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.795446 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.796296 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:55 crc kubenswrapper[4869]: I0202 14:37:55.796639 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:55 crc kubenswrapper[4869]: W0202 14:37:55.804722 4869 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27215": dial tcp 38.129.56.82:6443: connect: connection refused
Feb 02 14:37:55 crc kubenswrapper[4869]: E0202 14:37:55.804813 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27215\": dial tcp 38.129.56.82:6443: connect: connection refused" logger="UnhandledError"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.128120 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.180465 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.180857 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.180984 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.462046 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.464930 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.465572 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.465895 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.479655 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.479726 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:56 crc kubenswrapper[4869]: E0202 14:37:56.480380 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.481453 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:56 crc kubenswrapper[4869]: I0202 14:37:56.803198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6761a4a5165ae6cb7a772c44b1665b6b7ebe7de99f1094f5adde7248288ac27f"}
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.813048 4869 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="ed0d9d90c2e5bb55df0d6a404530efce84c940be6299ebe61ba479a34e5bf850" exitCode=0
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.813167 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"ed0d9d90c2e5bb55df0d6a404530efce84c940be6299ebe61ba479a34e5bf850"}
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.813693 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.813716 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:57 crc kubenswrapper[4869]: E0202 14:37:57.814344 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.82:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.814349 4869 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.814953 4869 status_manager.go:851] "Failed to get status for pod" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:57 crc kubenswrapper[4869]: I0202 14:37:57.815294 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.82:6443: connect: connection refused"
Feb 02 14:37:58 crc kubenswrapper[4869]: I0202 14:37:58.821721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cb5c56a5e047124905812b7a14b8a34862cb45bda2a033dcd929ee28793d1f98"}
Feb 02 14:37:58 crc kubenswrapper[4869]: I0202 14:37:58.822128 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"79ea061f5625451f1692831d2c2774a2c11d2f8e0feb297db2721ec6e1a18cb1"}
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.830762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3ed615df896343e6330ac413783bf3ec5e1f88d8297b8815bd0be595dc066dc4"}
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831309 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0ef39b07741ca7c20804fdc8fe96e0862226159e2aaebe3d16ce796c258f799c"}
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831327 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"244b6f19fb1e568ae7381f1ff6c9edef2df1bb485b3508f8ae05d93afe8ad476"}
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831343 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831180 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:37:59 crc kubenswrapper[4869]: I0202 14:37:59.831363 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:38:01 crc kubenswrapper[4869]: I0202 14:38:01.317116 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 14:38:01 crc kubenswrapper[4869]: I0202 14:38:01.482291 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:01 crc kubenswrapper[4869]: I0202 14:38:01.482462 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:01 crc kubenswrapper[4869]: I0202 14:38:01.490763 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.847723 4869 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.848947 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.868803 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.869154 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.873063 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.875514 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update"
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c96bdd8a-fdad-42aa-baba-291b9cd0c8d3" Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.924449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 14:38:04 crc kubenswrapper[4869]: I0202 14:38:04.976948 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 14:38:05 crc kubenswrapper[4869]: I0202 14:38:05.875883 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d" Feb 02 14:38:05 crc kubenswrapper[4869]: I0202 14:38:05.875939 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="49510a01-65b6-4a4a-a398-11a00b05a68d" Feb 02 14:38:06 crc kubenswrapper[4869]: I0202 14:38:06.181600 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 14:38:06 crc kubenswrapper[4869]: I0202 14:38:06.181684 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 14:38:08 crc kubenswrapper[4869]: E0202 14:38:08.493322 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 14:38:09 crc kubenswrapper[4869]: E0202 14:38:09.482873 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 14:38:09 crc kubenswrapper[4869]: I0202 14:38:09.486356 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c96bdd8a-fdad-42aa-baba-291b9cd0c8d3" Feb 02 14:38:09 crc kubenswrapper[4869]: E0202 14:38:09.493794 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 14:38:14 crc kubenswrapper[4869]: I0202 14:38:14.741104 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 14:38:14 crc kubenswrapper[4869]: I0202 14:38:14.802332 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 14:38:14 crc kubenswrapper[4869]: I0202 14:38:14.877719 
4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.004017 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.326348 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.476838 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.643613 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.799412 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.852470 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.890077 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 14:38:15 crc kubenswrapper[4869]: I0202 14:38:15.954446 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.112188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.156963 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.181168 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.181546 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.181770 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.183755 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.183936 4869 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48" gracePeriod=30 Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.370171 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.421410 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.448833 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.674337 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.711674 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.837296 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 14:38:16 crc kubenswrapper[4869]: I0202 14:38:16.976630 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.142640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.166250 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.253241 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.451103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.523501 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.620217 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.695289 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.720329 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.796383 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.880956 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.919640 4869 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 14:38:17 crc kubenswrapper[4869]: I0202 14:38:17.992287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.013182 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.082528 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.088672 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.097793 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.116147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.126860 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.142943 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.285316 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.334056 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.342964 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.369764 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.453296 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.468938 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.498659 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.585447 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.637401 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.641951 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 
14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.836346 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.912576 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 14:38:18 crc kubenswrapper[4869]: I0202 14:38:18.915836 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.071103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.139399 4869 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.209124 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.374610 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.377976 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.377952381 podStartE2EDuration="38.377952381s" podCreationTimestamp="2026-02-02 14:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:38:04.595950288 +0000 UTC m=+286.240587058" watchObservedRunningTime="2026-02-02 14:38:19.377952381 +0000 UTC m=+301.022589151" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.380179 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.380252 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.386531 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.403930 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.403892467 podStartE2EDuration="15.403892467s" podCreationTimestamp="2026-02-02 14:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:38:19.401196901 +0000 UTC m=+301.045833691" watchObservedRunningTime="2026-02-02 14:38:19.403892467 +0000 UTC m=+301.048529237" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.410973 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.412035 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.414937 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.590071 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.602192 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.622031 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.688190 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.724139 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.798668 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.839601 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.888646 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.899800 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 14:38:19 crc kubenswrapper[4869]: I0202 14:38:19.982267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.031643 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.067728 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.104936 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.252672 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.314232 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.386165 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.455800 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.461798 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.567635 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.592323 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.648342 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.690090 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.702275 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.722788 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.781972 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.788966 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.846025 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.848800 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 14:38:20 crc kubenswrapper[4869]: I0202 14:38:20.882699 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.022472 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.031677 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.048244 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.101171 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.102108 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.258706 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.307060 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.335641 4869 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.410204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.430789 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.461857 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.480109 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.524170 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.537649 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.561786 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.599980 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.663365 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.787246 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.787290 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.813391 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.914030 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.945587 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 14:38:21 crc kubenswrapper[4869]: I0202 14:38:21.967054 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.006231 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.042455 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.120513 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.304042 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.307517 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.323761 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.357116 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.388638 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.448327 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.480401 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.481075 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.610457 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.611705 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.669118 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.731995 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.778976 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.793842 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.805642 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.806379 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.839846 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.865192 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.894893 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.919262 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 14:38:22 crc kubenswrapper[4869]: I0202 14:38:22.926997 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.012529 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.018432 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.095755 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.286601 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.335509 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.353254 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.461988 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.485261 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.687089 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.881753 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 14:38:23 crc kubenswrapper[4869]: I0202 14:38:23.954396 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.016572 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.196835 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.240487 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.247679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.270341 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.470958 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.507000 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.607491 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.649816 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.702131 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.714317 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.818674 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.829173 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.887159 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.902132 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 14:38:24 crc kubenswrapper[4869]: I0202 14:38:24.998373 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.029758 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.038639 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.104951 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.114202 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.153834 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.370899 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.379621 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.403221 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.405269 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 14:38:25 crc 
kubenswrapper[4869]: I0202 14:38:25.514716 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.638163 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.648075 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.730301 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.804003 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.821282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.841783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.849853 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.888575 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 14:38:25 crc kubenswrapper[4869]: I0202 14:38:25.960327 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.024110 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.040322 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.052956 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.076043 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.197085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.224793 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.228208 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.461672 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.543039 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 
02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.573848 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.615253 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.642670 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 14:38:26 crc kubenswrapper[4869]: I0202 14:38:26.656245 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.036701 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.247447 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.247805 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" gracePeriod=5 Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.272178 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.401502 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.442330 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.649696 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.680147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.900899 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.942061 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.964520 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 14:38:27 crc kubenswrapper[4869]: I0202 14:38:27.977825 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.115806 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.291463 4869 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.362087 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.411093 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.465585 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.469020 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.678605 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.684050 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.702206 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.830982 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.841461 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.847497 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.868293 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.876485 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.923221 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 14:38:28 crc kubenswrapper[4869]: I0202 14:38:28.953968 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.085707 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.150078 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.161220 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.161645 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.179018 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 
14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.214204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.385404 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.399480 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.432234 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.538543 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.539330 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6crm" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="registry-server" containerID="cri-o://6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.545226 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.551197 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h9pgx" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="registry-server" containerID="cri-o://0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.560734 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.561059 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" containerID="cri-o://86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.564654 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.565216 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wrnr2" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="registry-server" containerID="cri-o://1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.580190 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.580591 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k7wp9" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="registry-server" containerID="cri-o://4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149" gracePeriod=30 Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.740480 
Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.764794 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.940871 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 14:38:29 crc kubenswrapper[4869]: I0202 14:38:29.952513 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.036614 4869 generic.go:334] "Generic (PLEG): container finished" podID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerID="86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a" exitCode=0
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.036746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" event={"ID":"ee31f112-5156-4239-a760-fb4c6bb9673d","Type":"ContainerDied","Data":"86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a"}
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.041901 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.042444 4869 generic.go:334] "Generic (PLEG): container finished" podID="7bc37994-d436-4a72-93dd-610683ab871f" containerID="1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712" exitCode=0
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.042529 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerDied","Data":"1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712"}
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049200 4869 generic.go:334] "Generic (PLEG): container finished" podID="20990512-5147-4de8-95e0-f40e2156f395" containerID="6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c" exitCode=0
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049258 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerDied","Data":"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"}
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"20990512-5147-4de8-95e0-f40e2156f395","Type":"ContainerDied","Data":"63b62c3c310182414e285b775897296c2f662f58b08903ff210519308baba3a6"}
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.049350 4869 scope.go:117] "RemoveContainer" containerID="6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.053295 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerID="4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149" exitCode=0
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.053411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerDied","Data":"4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149"}
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.057864 4869 generic.go:334] "Generic (PLEG): container finished" podID="35334030-48c7-4d7e-b202-75371c2c74f0" containerID="0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6" exitCode=0
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.057952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerDied","Data":"0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6"}
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.088323 4869 scope.go:117] "RemoveContainer" containerID="7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.100669 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.117413 4869 scope.go:117] "RemoveContainer" containerID="2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.134195 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") pod \"20990512-5147-4de8-95e0-f40e2156f395\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") "
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.134308 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") pod \"20990512-5147-4de8-95e0-f40e2156f395\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") "
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.134338 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") pod \"20990512-5147-4de8-95e0-f40e2156f395\" (UID: \"20990512-5147-4de8-95e0-f40e2156f395\") "
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.135843 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities" (OuterVolumeSpecName: "utilities") pod "20990512-5147-4de8-95e0-f40e2156f395" (UID: "20990512-5147-4de8-95e0-f40e2156f395"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.140429 4869 scope.go:117] "RemoveContainer" containerID="6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"
Feb 02 14:38:30 crc kubenswrapper[4869]: E0202 14:38:30.140827 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c\": container with ID starting with 6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c not found: ID does not exist" containerID="6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.140867 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c"} err="failed to get container status \"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c\": rpc error: code = NotFound desc = could not find container \"6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c\": container with ID starting with 6fc07e629352a605fe07933ebf4108c9145df1f62b704b74e49d27114534622c not found: ID does not exist"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.140936 4869 scope.go:117] "RemoveContainer" containerID="7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126"
Feb 02 14:38:30 crc kubenswrapper[4869]: E0202 14:38:30.141188 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126\": container with ID starting with 7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126 not found: ID does not exist" containerID="7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.141240 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126"} err="failed to get container status \"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126\": rpc error: code = NotFound desc = could not find container \"7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126\": container with ID starting with 7f80d236aab15af624602ae99d48b8c03a60e6257808e3881f49077d0d0dc126 not found: ID does not exist"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.141261 4869 scope.go:117] "RemoveContainer" containerID="2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb"
Feb 02 14:38:30 crc kubenswrapper[4869]: E0202 14:38:30.141517 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb\": container with ID starting with 2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb not found: ID does not exist" containerID="2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.141552 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb"} err="failed to get container status \"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb\": rpc error: code = NotFound desc = could not find container \"2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb\": container with ID starting with 2c7f75283d68e5662a20650d8de945ca3d05cd064a874631bb45d810e91d0fdb not found: ID does not exist"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.141756 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd" (OuterVolumeSpecName: "kube-api-access-cd4wd") pod "20990512-5147-4de8-95e0-f40e2156f395" (UID: "20990512-5147-4de8-95e0-f40e2156f395"). InnerVolumeSpecName "kube-api-access-cd4wd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.186210 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20990512-5147-4de8-95e0-f40e2156f395" (UID: "20990512-5147-4de8-95e0-f40e2156f395"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.209245 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.237155 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd4wd\" (UniqueName: \"kubernetes.io/projected/20990512-5147-4de8-95e0-f40e2156f395-kube-api-access-cd4wd\") on node \"crc\" DevicePath \"\""
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.237196 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.237210 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20990512-5147-4de8-95e0-f40e2156f395-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.384256 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"]
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.390536 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"]
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.448701 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.529110 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9"
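The E-level "ContainerStatus from runtime service failed ... NotFound" entries above are expected noise rather than failures: the first RemoveContainer pass already deleted the container in CRI-O, so the follow-up status lookup finds nothing and the deletor treats the ID as already gone. A sketch of that idempotent-delete pattern in Go, assuming a hypothetical Runtime interface (only the NotFound handling mirrors the log):

// removesketch.go: NotFound from the runtime means "already removed".
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Runtime is a stand-in for a container runtime service, not the real CRI client.
type Runtime interface {
	RemoveContainer(id string) error
}

func removeIdempotent(r Runtime, id string) error {
	err := r.RemoveContainer(id)
	if status.Code(err) == codes.NotFound {
		// A previous pass already removed it; deletion has converged.
		fmt.Printf("container %.12s already gone; treating removal as done\n", id)
		return nil
	}
	return err // nil on success, or a real failure worth surfacing
}

// fakeRuntime simulates the second RemoveContainer pass seen in the log.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	if err := removeIdempotent(fakeRuntime{}, "6fc07e629352a605fe07933ebf4108c9"); err != nil {
		fmt.Println("unexpected:", err)
	}
}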
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.549834 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.645196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") pod \"c0c32a61-d689-4c79-8348-90c8ab61b594\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.645311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") pod \"c0c32a61-d689-4c79-8348-90c8ab61b594\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.645353 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") pod \"c0c32a61-d689-4c79-8348-90c8ab61b594\" (UID: \"c0c32a61-d689-4c79-8348-90c8ab61b594\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.646308 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities" (OuterVolumeSpecName: "utilities") pod "c0c32a61-d689-4c79-8348-90c8ab61b594" (UID: "c0c32a61-d689-4c79-8348-90c8ab61b594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.649472 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw" (OuterVolumeSpecName: "kube-api-access-4x5bw") pod "c0c32a61-d689-4c79-8348-90c8ab61b594" (UID: "c0c32a61-d689-4c79-8348-90c8ab61b594"). InnerVolumeSpecName "kube-api-access-4x5bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.650197 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.658698 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.676601 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") pod \"7bc37994-d436-4a72-93dd-610683ab871f\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747167 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") pod \"35334030-48c7-4d7e-b202-75371c2c74f0\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747261 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") pod \"35334030-48c7-4d7e-b202-75371c2c74f0\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747296 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") pod \"7bc37994-d436-4a72-93dd-610683ab871f\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747353 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") pod \"7bc37994-d436-4a72-93dd-610683ab871f\" (UID: \"7bc37994-d436-4a72-93dd-610683ab871f\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") pod \"35334030-48c7-4d7e-b202-75371c2c74f0\" (UID: \"35334030-48c7-4d7e-b202-75371c2c74f0\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747731 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.747744 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x5bw\" (UniqueName: \"kubernetes.io/projected/c0c32a61-d689-4c79-8348-90c8ab61b594-kube-api-access-4x5bw\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.748974 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities" (OuterVolumeSpecName: "utilities") pod "35334030-48c7-4d7e-b202-75371c2c74f0" (UID: "35334030-48c7-4d7e-b202-75371c2c74f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.748968 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities" (OuterVolumeSpecName: "utilities") pod "7bc37994-d436-4a72-93dd-610683ab871f" (UID: "7bc37994-d436-4a72-93dd-610683ab871f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.754247 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm" (OuterVolumeSpecName: "kube-api-access-44bcm") pod "7bc37994-d436-4a72-93dd-610683ab871f" (UID: "7bc37994-d436-4a72-93dd-610683ab871f"). InnerVolumeSpecName "kube-api-access-44bcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.754326 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn" (OuterVolumeSpecName: "kube-api-access-zpswn") pod "35334030-48c7-4d7e-b202-75371c2c74f0" (UID: "35334030-48c7-4d7e-b202-75371c2c74f0"). InnerVolumeSpecName "kube-api-access-zpswn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.773477 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bc37994-d436-4a72-93dd-610683ab871f" (UID: "7bc37994-d436-4a72-93dd-610683ab871f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.797147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.803810 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35334030-48c7-4d7e-b202-75371c2c74f0" (UID: "35334030-48c7-4d7e-b202-75371c2c74f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.810602 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0c32a61-d689-4c79-8348-90c8ab61b594" (UID: "c0c32a61-d689-4c79-8348-90c8ab61b594"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.848578 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") pod \"ee31f112-5156-4239-a760-fb4c6bb9673d\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") pod \"ee31f112-5156-4239-a760-fb4c6bb9673d\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") pod \"ee31f112-5156-4239-a760-fb4c6bb9673d\" (UID: \"ee31f112-5156-4239-a760-fb4c6bb9673d\") " Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849639 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849726 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44bcm\" (UniqueName: \"kubernetes.io/projected/7bc37994-d436-4a72-93dd-610683ab871f-kube-api-access-44bcm\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849805 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849866 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpswn\" (UniqueName: \"kubernetes.io/projected/35334030-48c7-4d7e-b202-75371c2c74f0-kube-api-access-zpswn\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849953 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bc37994-d436-4a72-93dd-610683ab871f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.850027 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c32a61-d689-4c79-8348-90c8ab61b594-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.850104 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35334030-48c7-4d7e-b202-75371c2c74f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.849970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ee31f112-5156-4239-a760-fb4c6bb9673d" (UID: "ee31f112-5156-4239-a760-fb4c6bb9673d"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.855928 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl" (OuterVolumeSpecName: "kube-api-access-fglxl") pod "ee31f112-5156-4239-a760-fb4c6bb9673d" (UID: "ee31f112-5156-4239-a760-fb4c6bb9673d"). InnerVolumeSpecName "kube-api-access-fglxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.856653 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ee31f112-5156-4239-a760-fb4c6bb9673d" (UID: "ee31f112-5156-4239-a760-fb4c6bb9673d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.890456 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.951485 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.951536 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee31f112-5156-4239-a760-fb4c6bb9673d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:30 crc kubenswrapper[4869]: I0202 14:38:30.951549 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fglxl\" (UniqueName: \"kubernetes.io/projected/ee31f112-5156-4239-a760-fb4c6bb9673d-kube-api-access-fglxl\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.067779 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7wp9" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.067773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7wp9" event={"ID":"c0c32a61-d689-4c79-8348-90c8ab61b594","Type":"ContainerDied","Data":"4b24ce2f2248f4687d66222d8d64c3f4c7ab1a667da994a65103b5daf7f6074a"} Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.067960 4869 scope.go:117] "RemoveContainer" containerID="4e950d5166ad52c9759c793235c659981b981ee18242acc5362e3347f45fd149" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.071210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" event={"ID":"ee31f112-5156-4239-a760-fb4c6bb9673d","Type":"ContainerDied","Data":"abf150712433e6a69bcdbac96eb8f5a7e4f4678220a199cb5fef1de1079707b8"} Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.071681 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xl8hj" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.074754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9pgx" event={"ID":"35334030-48c7-4d7e-b202-75371c2c74f0","Type":"ContainerDied","Data":"8d9df88387111e57bb9b1545d6cad7ddb2c341d0c3125931bf95ce3cfbbe8249"} Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.074835 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9pgx" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.079976 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrnr2" event={"ID":"7bc37994-d436-4a72-93dd-610683ab871f","Type":"ContainerDied","Data":"b1580b4316ca71373b5cb2c825bf6078883c98f4a09960236d48783fdf4eb2b0"} Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.080019 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrnr2" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.080385 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.089380 4869 scope.go:117] "RemoveContainer" containerID="26b06ae64272a38d354c10e93d5b78b359d2c42ba63c10fa86dde8816377339c" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.109509 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.113545 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h9pgx"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.122493 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.126595 4869 scope.go:117] "RemoveContainer" containerID="5bd8c5ee8e9e88d2880af3adebbdb0e7854ddadb441729295abb6d7e6958afdd" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.135847 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.141402 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xl8hj"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.148743 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.156362 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k7wp9"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.157148 4869 scope.go:117] "RemoveContainer" containerID="86d480521de92a1c10ef10815a46b5964f911171ebb84ddcd7d082934561032a" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.160581 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.165356 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrnr2"] Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.171199 4869 scope.go:117] "RemoveContainer" 
containerID="0d7544a33c4728eb616399a49bc213ee02ddda2474451ec7c72c35c4b44c16d6" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.188179 4869 scope.go:117] "RemoveContainer" containerID="0b2c3ac4d08f82b7a5fad7e7219bf53013c9b65776a69054e3a436bb3b5edd60" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.206524 4869 scope.go:117] "RemoveContainer" containerID="cec776d323dbe8236b1c9db4384ebac1fa16daa022330512eaace0844c3b9f88" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.221359 4869 scope.go:117] "RemoveContainer" containerID="1c4c3e93ecbc7617327522dfacd5633cdb7970a5b4bcc862bfe0f20a55158712" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.244177 4869 scope.go:117] "RemoveContainer" containerID="5adb81683a3033beec8093b130282168a76c6d84454acac94fe5c2d0d6d3406d" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.261586 4869 scope.go:117] "RemoveContainer" containerID="cdd5576f9f5156d7b56f7ccd77833310c25ec9af1f7cd6b12b8a45a03d8370d2" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.468976 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20990512-5147-4de8-95e0-f40e2156f395" path="/var/lib/kubelet/pods/20990512-5147-4de8-95e0-f40e2156f395/volumes" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.469773 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" path="/var/lib/kubelet/pods/35334030-48c7-4d7e-b202-75371c2c74f0/volumes" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.470407 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bc37994-d436-4a72-93dd-610683ab871f" path="/var/lib/kubelet/pods/7bc37994-d436-4a72-93dd-610683ab871f/volumes" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.471519 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" path="/var/lib/kubelet/pods/c0c32a61-d689-4c79-8348-90c8ab61b594/volumes" Feb 02 14:38:31 crc kubenswrapper[4869]: I0202 14:38:31.472309 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" path="/var/lib/kubelet/pods/ee31f112-5156-4239-a760-fb4c6bb9673d/volumes" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.857130 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.857557 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983143 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983388 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983473 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983466 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983506 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983806 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983822 4869 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983832 4869 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.983841 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:32 crc kubenswrapper[4869]: I0202 14:38:32.992430 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.086154 4869 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.097821 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.097901 4869 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" exitCode=137 Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.097984 4869 scope.go:117] "RemoveContainer" containerID="b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.098055 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.116702 4869 scope.go:117] "RemoveContainer" containerID="b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" Feb 02 14:38:33 crc kubenswrapper[4869]: E0202 14:38:33.117648 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2\": container with ID starting with b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2 not found: ID does not exist" containerID="b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.117718 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2"} err="failed to get container status \"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2\": rpc error: code = NotFound desc = could not find container \"b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2\": container with ID starting with b512524314e83235eec137d0d409bad2a658621203aca725253ebef613f855f2 not found: ID does not exist" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.470697 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.471503 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.483034 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.483070 4869 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1e0f1580-dcf4-4d0f-9452-87e32349b7e4" Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.486967 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 14:38:33 crc kubenswrapper[4869]: I0202 14:38:33.487012 4869 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1e0f1580-dcf4-4d0f-9452-87e32349b7e4" Feb 02 14:38:43 crc kubenswrapper[4869]: I0202 14:38:43.222197 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 14:38:43 crc kubenswrapper[4869]: I0202 14:38:43.645293 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 14:38:44 crc kubenswrapper[4869]: I0202 14:38:44.665783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.188416 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190627 4869 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190680 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48" exitCode=137 Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"660069e36a1bb103bae58fec584944b9504a8f75ba2c79dc7efbec7710875e48"} Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1ea5b7458c59608c72e3a8c6859a0b53705310e26f2ff2566fc22841a8f80c2a"} Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.190767 4869 scope.go:117] "RemoveContainer" containerID="24da1d0545ca8fdeb6fbbc1701ffba5f415cdd3b97a2eafcd35643dce80baa53" Feb 02 14:38:47 crc kubenswrapper[4869]: I0202 14:38:47.247188 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 14:38:48 crc kubenswrapper[4869]: I0202 14:38:48.198958 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 14:38:50 crc kubenswrapper[4869]: I0202 14:38:50.118736 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.128136 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.180194 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.184477 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.255596 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 14:38:56 crc kubenswrapper[4869]: I0202 14:38:56.393407 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 14:39:01 crc kubenswrapper[4869]: I0202 14:39:01.765474 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659502 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbjts"] Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659780 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659794 
4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659807 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659813 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659822 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659833 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659849 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659857 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659866 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659873 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659885 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659892 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659927 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659934 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659949 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659958 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659968 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.659974 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659983 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" containerName="installer" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 
14:39:02.659990 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" containerName="installer" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.659999 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660007 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.660016 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660023 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="extract-utilities" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.660034 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660040 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="extract-content" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.660049 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660055 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" Feb 02 14:39:02 crc kubenswrapper[4869]: E0202 14:39:02.660063 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660071 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660175 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="20990512-5147-4de8-95e0-f40e2156f395" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660186 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bc37994-d436-4a72-93dd-610683ab871f" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660197 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660204 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee31f112-5156-4239-a760-fb4c6bb9673d" containerName="marketplace-operator" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660212 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9baf6ea4-8ab5-47b2-b5da-cb9b1978db5a" containerName="installer" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660219 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="35334030-48c7-4d7e-b202-75371c2c74f0" containerName="registry-server" Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660228 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c32a61-d689-4c79-8348-90c8ab61b594" containerName="registry-server" 
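Admitting the replacement marketplace-operator pod makes the CPU and memory managers sweep per-container accounting left behind by the pods removed earlier; despite the E log level, each "RemoveStaleState: removing container" entry is routine cleanup paired with a "Deleted CPUSet assignment". A Go sketch that groups those entries by pod UID, to check that every container of each deleted pod was swept (field names taken from the log text itself):

// stalestate.go: group RemoveStaleState entries by pod UID.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var staleRe = regexp.MustCompile(`"RemoveStaleState: removing container" podUID="([^"]+)" containerName="([^"]+)"`)

func main() {
	byPod := map[string][]string{} // pod UID -> swept container names
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		if m := staleRe.FindStringSubmatch(sc.Text()); m != nil {
			byPod[m[1]] = append(byPod[m[1]], m[2])
		}
	}
	for uid, containers := range byPod {
		fmt.Printf("%.8s: %v\n", uid, containers)
	}
}

On the entries above this would show, for example, that each catalog pod had extract-utilities, extract-content, and registry-server swept.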
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.660727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.664403 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.666631 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.666930 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.667575 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.671875 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.681370 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbjts"]
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.767201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.767256 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvf6\" (UniqueName: \"kubernetes.io/projected/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-kube-api-access-6rvf6\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.767290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.868829 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.868925 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvf6\" (UniqueName: \"kubernetes.io/projected/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-kube-api-access-6rvf6\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.868978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.872263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.884215 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.903212 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvf6\" (UniqueName: \"kubernetes.io/projected/ac6a4d49-eb04-4ee1-be26-63f67b0a092a-kube-api-access-6rvf6\") pod \"marketplace-operator-79b997595-nbjts\" (UID: \"ac6a4d49-eb04-4ee1-be26-63f67b0a092a\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:02 crc kubenswrapper[4869]: I0202 14:39:02.980202 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
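Annotation: the VerifyControllerAttachedVolume -> MountVolume started -> MountVolume.SetUp succeeded sequence above is the kubelet volume manager reconciling its desired state of world against what is actually mounted. A minimal sketch of that reconcile pattern in Go (hypothetical maps standing in for kubelet's desired/actual state caches; not kubelet's actual code):

    package main

    import "fmt"

    // reconcile mounts every desired volume that is not yet in the actual set.
    // desired maps volume name -> plugin kind; actual records completed mounts.
    func reconcile(desired map[string]string, actual map[string]bool) {
        for name, kind := range desired {
            if actual[name] {
                continue // already mounted, nothing to do
            }
            // Attachable volumes would be verified as controller-attached here.
            fmt.Printf("MountVolume started for volume %q (%s)\n", name, kind)
            // ... volume plugin SetUp would run here; record success ...
            actual[name] = true
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
        }
    }

    func main() {
        desired := map[string]string{
            "marketplace-trusted-ca":       "kubernetes.io/configmap",
            "marketplace-operator-metrics": "kubernetes.io/secret",
            "kube-api-access-6rvf6":        "kubernetes.io/projected",
        }
        reconcile(desired, map[string]bool{})
    }

Because the loop is idempotent against the actual-state map, re-running it after a partial failure simply retries the missing mounts, which is why the same volume can appear in several "started" lines across a real log without harm.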
Feb 02 14:39:03 crc kubenswrapper[4869]: I0202 14:39:03.418648 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbjts"]
Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.145595 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.296114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" event={"ID":"ac6a4d49-eb04-4ee1-be26-63f67b0a092a","Type":"ContainerStarted","Data":"b2e029c65d6e48d2645c3fb492df9d470b266ae7b404a1a2155b1b79d629205e"}
Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.296183 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" event={"ID":"ac6a4d49-eb04-4ee1-be26-63f67b0a092a","Type":"ContainerStarted","Data":"3eb2749fc9592070d3e3312a947ae9e8dfe258360eea4d3e751f4bb67da2ad1e"}
Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.296427 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.299463 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts"
Feb 02 14:39:04 crc kubenswrapper[4869]: I0202 14:39:04.315227 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nbjts" podStartSLOduration=2.3152014530000002 podStartE2EDuration="2.315201453s" podCreationTimestamp="2026-02-02 14:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:39:04.315041798 +0000 UTC m=+345.959678568" watchObservedRunningTime="2026-02-02 14:39:04.315201453 +0000 UTC m=+345.959838223"
Feb 02 14:39:05 crc kubenswrapper[4869]: I0202 14:39:05.253080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.380401 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ndh2z"]
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.382208 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.386496 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.393524 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndh2z"]
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.454342 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46dm\" (UniqueName: \"kubernetes.io/projected/13714902-1992-4167-97b5-f3465ce5038f-kube-api-access-m46dm\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.454472 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-utilities\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.454521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-catalog-content\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.555408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-utilities\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.555477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-catalog-content\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.555553 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m46dm\" (UniqueName: \"kubernetes.io/projected/13714902-1992-4167-97b5-f3465ce5038f-kube-api-access-m46dm\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.556109 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-utilities\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.556142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13714902-1992-4167-97b5-f3465ce5038f-catalog-content\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.578462 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m46dm\" (UniqueName: \"kubernetes.io/projected/13714902-1992-4167-97b5-f3465ce5038f-kube-api-access-m46dm\") pod \"redhat-operators-ndh2z\" (UID: \"13714902-1992-4167-97b5-f3465ce5038f\") " pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.581373 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xjh6d"]
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.583518 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.586413 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.591190 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjh6d"]
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.657064 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmm5v\" (UniqueName: \"kubernetes.io/projected/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-kube-api-access-cmm5v\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.657177 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-catalog-content\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.657197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-utilities\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.701028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-utilities\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759158 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-catalog-content\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmm5v\" (UniqueName: \"kubernetes.io/projected/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-kube-api-access-cmm5v\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-utilities\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.759795 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-catalog-content\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.781735 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmm5v\" (UniqueName: \"kubernetes.io/projected/5e1c62bb-e047-4367-9cd0-572ac75fd6f6-kube-api-access-cmm5v\") pod \"certified-operators-xjh6d\" (UID: \"5e1c62bb-e047-4367-9cd0-572ac75fd6f6\") " pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:08 crc kubenswrapper[4869]: I0202 14:39:08.918419 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.122902 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjh6d"]
Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.149284 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ndh2z"]
Feb 02 14:39:09 crc kubenswrapper[4869]: W0202 14:39:09.156877 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13714902_1992_4167_97b5_f3465ce5038f.slice/crio-06a6bdd4349391969f8c35b406e4d27ba4a3a45bed65800b6f56feea63ff741a WatchSource:0}: Error finding container 06a6bdd4349391969f8c35b406e4d27ba4a3a45bed65800b6f56feea63ff741a: Status 404 returned error can't find the container with id 06a6bdd4349391969f8c35b406e4d27ba4a3a45bed65800b6f56feea63ff741a
Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.335378 4869 generic.go:334] "Generic (PLEG): container finished" podID="13714902-1992-4167-97b5-f3465ce5038f" containerID="70989fe11ed14396a31642b6c670ee78915afd5b782f6428feb661ae40b98ce9" exitCode=0
Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.335472 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerDied","Data":"70989fe11ed14396a31642b6c670ee78915afd5b782f6428feb661ae40b98ce9"}
Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.335895 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerStarted","Data":"06a6bdd4349391969f8c35b406e4d27ba4a3a45bed65800b6f56feea63ff741a"}
Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.340405 4869 generic.go:334] "Generic (PLEG): container finished" podID="5e1c62bb-e047-4367-9cd0-572ac75fd6f6" containerID="363a6e67ae8e4aad0256851aded7eebb05e4e2c2143f2c26da007d4540107db2" exitCode=0
Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.340478 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerDied","Data":"363a6e67ae8e4aad0256851aded7eebb05e4e2c2143f2c26da007d4540107db2"}
Feb 02 14:39:09 crc kubenswrapper[4869]: I0202 14:39:09.340519 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerStarted","Data":"39c11bf6bed1d7b894405306c81ec4e98b915aefee768e0f754abb720e2c0c31"}
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.177314 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7q5gz"]
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.178631 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q5gz"
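Annotation: the "Generic (PLEG): container finished" lines come from the kubelet's pod lifecycle event generator, which periodically relists container states and turns diffs into ContainerStarted/ContainerDied events for the sync loop; the exitCode=0 deaths above are short-lived catalog-extraction containers completing normally. A toy version of that relist diff (hypothetical types, stdlib Go, conceptual only):

    package main

    import "fmt"

    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    // relist diffs the previous and current container states and emits
    // PLEG-style lifecycle events for anything that changed.
    func relist(old, cur map[string]state) []string {
        var events []string
        for id, s := range cur {
            switch prev := old[id]; {
            case prev != running && s == running:
                events = append(events, "ContainerStarted "+id)
            case prev == running && s == exited:
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        // IDs abbreviated from the entries above, for illustration.
        old := map[string]state{"70989fe1": running}
        cur := map[string]state{"70989fe1": exited, "06a6bdd4": running}
        for _, e := range relist(old, cur) {
            fmt.Println(e)
        }
    }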
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.182308 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.202869 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q5gz"]
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.281265 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-utilities\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.281393 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-catalog-content\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.281457 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4cz\" (UniqueName: \"kubernetes.io/projected/395af9bf-292b-41d1-a4ad-e4983331bc2d-kube-api-access-tq4cz\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.347886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerStarted","Data":"fb751668cace5e796104b0026041db69850061244a846942538cac63d0630eea"}
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.383682 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-utilities\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.383751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-catalog-content\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.383775 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq4cz\" (UniqueName: \"kubernetes.io/projected/395af9bf-292b-41d1-a4ad-e4983331bc2d-kube-api-access-tq4cz\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.384455 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-utilities\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.384552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/395af9bf-292b-41d1-a4ad-e4983331bc2d-catalog-content\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.408266 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq4cz\" (UniqueName: \"kubernetes.io/projected/395af9bf-292b-41d1-a4ad-e4983331bc2d-kube-api-access-tq4cz\") pod \"community-operators-7q5gz\" (UID: \"395af9bf-292b-41d1-a4ad-e4983331bc2d\") " pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.516753 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:10 crc kubenswrapper[4869]: I0202 14:39:10.923534 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q5gz"]
Feb 02 14:39:10 crc kubenswrapper[4869]: W0202 14:39:10.929198 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod395af9bf_292b_41d1_a4ad_e4983331bc2d.slice/crio-501049222b7ae4b82c4c0607218c41ac98e9b96feec883f82892305d12a80f06 WatchSource:0}: Error finding container 501049222b7ae4b82c4c0607218c41ac98e9b96feec883f82892305d12a80f06: Status 404 returned error can't find the container with id 501049222b7ae4b82c4c0607218c41ac98e9b96feec883f82892305d12a80f06
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.045807 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.175156 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hh8gt"]
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.176404 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.179101 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.227286 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh8gt"]
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.299327 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9tml\" (UniqueName: \"kubernetes.io/projected/59d9a56c-d3b3-438c-8047-097cb18004b1-kube-api-access-z9tml\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.299396 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-catalog-content\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.299434 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-utilities\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.356580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerStarted","Data":"e1e7140bac7235af94cee6b6434ebda86378ecaef383f2e6f017a7c810a50cf2"}
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.359368 4869 generic.go:334] "Generic (PLEG): container finished" podID="5e1c62bb-e047-4367-9cd0-572ac75fd6f6" containerID="fb751668cace5e796104b0026041db69850061244a846942538cac63d0630eea" exitCode=0
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.359410 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerDied","Data":"fb751668cace5e796104b0026041db69850061244a846942538cac63d0630eea"}
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.361405 4869 generic.go:334] "Generic (PLEG): container finished" podID="395af9bf-292b-41d1-a4ad-e4983331bc2d" containerID="7a31feacb682d936469883a845a376b5718e8a273369759d9c64ae025eba3375" exitCode=0
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.361448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerDied","Data":"7a31feacb682d936469883a845a376b5718e8a273369759d9c64ae025eba3375"}
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.361476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerStarted","Data":"501049222b7ae4b82c4c0607218c41ac98e9b96feec883f82892305d12a80f06"}
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.401173 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-catalog-content\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.401310 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-utilities\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.401511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9tml\" (UniqueName: \"kubernetes.io/projected/59d9a56c-d3b3-438c-8047-097cb18004b1-kube-api-access-z9tml\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.402523 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-catalog-content\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.403239 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59d9a56c-d3b3-438c-8047-097cb18004b1-utilities\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.445024 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9tml\" (UniqueName: \"kubernetes.io/projected/59d9a56c-d3b3-438c-8047-097cb18004b1-kube-api-access-z9tml\") pod \"redhat-marketplace-hh8gt\" (UID: \"59d9a56c-d3b3-438c-8047-097cb18004b1\") " pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.502172 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:11 crc kubenswrapper[4869]: I0202 14:39:11.920519 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hh8gt"]
Feb 02 14:39:11 crc kubenswrapper[4869]: W0202 14:39:11.932036 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59d9a56c_d3b3_438c_8047_097cb18004b1.slice/crio-2fa8f1354e6116354194520bc4fdf4b7f7df4c9be744a0528f5d3d4f4de72d24 WatchSource:0}: Error finding container 2fa8f1354e6116354194520bc4fdf4b7f7df4c9be744a0528f5d3d4f4de72d24: Status 404 returned error can't find the container with id 2fa8f1354e6116354194520bc4fdf4b7f7df4c9be744a0528f5d3d4f4de72d24
Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.371013 4869 generic.go:334] "Generic (PLEG): container finished" podID="13714902-1992-4167-97b5-f3465ce5038f" containerID="e1e7140bac7235af94cee6b6434ebda86378ecaef383f2e6f017a7c810a50cf2" exitCode=0
Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.371130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerDied","Data":"e1e7140bac7235af94cee6b6434ebda86378ecaef383f2e6f017a7c810a50cf2"}
Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.375836 4869 generic.go:334] "Generic (PLEG): container finished" podID="59d9a56c-d3b3-438c-8047-097cb18004b1" containerID="b9985f7429a2c20cbb511e0d24812ea1a14753155ff1da9a07857a29232435e8" exitCode=0
Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.375885 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh8gt" event={"ID":"59d9a56c-d3b3-438c-8047-097cb18004b1","Type":"ContainerDied","Data":"b9985f7429a2c20cbb511e0d24812ea1a14753155ff1da9a07857a29232435e8"}
Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.375955 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh8gt" event={"ID":"59d9a56c-d3b3-438c-8047-097cb18004b1","Type":"ContainerStarted","Data":"2fa8f1354e6116354194520bc4fdf4b7f7df4c9be744a0528f5d3d4f4de72d24"}
Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.378822 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjh6d" event={"ID":"5e1c62bb-e047-4367-9cd0-572ac75fd6f6","Type":"ContainerStarted","Data":"57553b2bca8f926094153a6c4f01060b889ab14dcd6016ab000b936c1106578e"}
Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.382622 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerStarted","Data":"849d9e1614ff29db841e7f9af8ed8e15dcbbf2f5c650a9ffc2905934514e6149"}
Feb 02 14:39:12 crc kubenswrapper[4869]: I0202 14:39:12.444655 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xjh6d" podStartSLOduration=1.977796587 podStartE2EDuration="4.444625124s" podCreationTimestamp="2026-02-02 14:39:08 +0000 UTC" firstStartedPulling="2026-02-02 14:39:09.343140166 +0000 UTC m=+350.987776936" lastFinishedPulling="2026-02-02 14:39:11.809968703 +0000 UTC m=+353.454605473" observedRunningTime="2026-02-02 14:39:12.440864132 +0000 UTC m=+354.085500902" watchObservedRunningTime="2026-02-02 14:39:12.444625124 +0000 UTC m=+354.089261894"
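Annotation: in these tracker entries, podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling); for certified-operators-xjh6d above, 4.444625124s - 2.466828537s = 1.977796587s, which matches exactly. A stdlib-Go check of that arithmetic, using the timestamps quoted in the entry (the layout string is Go's default time.Time format, which is how the log prints them):

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-02 14:39:08 +0000 UTC")
        running := mustParse("2026-02-02 14:39:12.444625124 +0000 UTC")
        firstPull := mustParse("2026-02-02 14:39:09.343140166 +0000 UTC")
        lastPull := mustParse("2026-02-02 14:39:11.809968703 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // E2E minus image-pull time
        fmt.Println(e2e, slo)                // 4.444625124s 1.977796587s
    }

The marketplace-operator entry earlier shows the degenerate case: both pull timestamps are the zero time (no pull was needed), so SLO and E2E durations coincide.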
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.391522 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ndh2z" event={"ID":"13714902-1992-4167-97b5-f3465ce5038f","Type":"ContainerStarted","Data":"a7b0389137253af6d37e348c1d32878d1bec9ddf49549469a52daa6efff33817"}
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.394258 4869 generic.go:334] "Generic (PLEG): container finished" podID="59d9a56c-d3b3-438c-8047-097cb18004b1" containerID="944668fb54c4b310f8a7b8e62680329cda99afcb1e1be80d5665ae6eb46ba989" exitCode=0
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.394345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh8gt" event={"ID":"59d9a56c-d3b3-438c-8047-097cb18004b1","Type":"ContainerDied","Data":"944668fb54c4b310f8a7b8e62680329cda99afcb1e1be80d5665ae6eb46ba989"}
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.396738 4869 generic.go:334] "Generic (PLEG): container finished" podID="395af9bf-292b-41d1-a4ad-e4983331bc2d" containerID="849d9e1614ff29db841e7f9af8ed8e15dcbbf2f5c650a9ffc2905934514e6149" exitCode=0
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.396796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerDied","Data":"849d9e1614ff29db841e7f9af8ed8e15dcbbf2f5c650a9ffc2905934514e6149"}
Feb 02 14:39:13 crc kubenswrapper[4869]: I0202 14:39:13.414795 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ndh2z" podStartSLOduration=1.953279125 podStartE2EDuration="5.414778307s" podCreationTimestamp="2026-02-02 14:39:08 +0000 UTC" firstStartedPulling="2026-02-02 14:39:09.338118262 +0000 UTC m=+350.982755032" lastFinishedPulling="2026-02-02 14:39:12.799617444 +0000 UTC m=+354.444254214" observedRunningTime="2026-02-02 14:39:13.414205592 +0000 UTC m=+355.058842382" watchObservedRunningTime="2026-02-02 14:39:13.414778307 +0000 UTC m=+355.059415077"
Feb 02 14:39:14 crc kubenswrapper[4869]: I0202 14:39:14.403942 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q5gz" event={"ID":"395af9bf-292b-41d1-a4ad-e4983331bc2d","Type":"ContainerStarted","Data":"fe7f752c7371146161b3322d19658bbf3624d19b144a69cd2446d1591c6d5154"}
Feb 02 14:39:14 crc kubenswrapper[4869]: I0202 14:39:14.407084 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hh8gt" event={"ID":"59d9a56c-d3b3-438c-8047-097cb18004b1","Type":"ContainerStarted","Data":"5c08a05becbc3df96b38abb582855ce693566fa31b29b170cf6a5dbdd37b6239"}
Feb 02 14:39:14 crc kubenswrapper[4869]: I0202 14:39:14.450087 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7q5gz" podStartSLOduration=1.99627111 podStartE2EDuration="4.450071668s" podCreationTimestamp="2026-02-02 14:39:10 +0000 UTC" firstStartedPulling="2026-02-02 14:39:11.363418064 +0000 UTC m=+353.008054834" lastFinishedPulling="2026-02-02 14:39:13.817218622 +0000 UTC m=+355.461855392" observedRunningTime="2026-02-02 14:39:14.430051937 +0000 UTC m=+356.074688707" watchObservedRunningTime="2026-02-02 14:39:14.450071668 +0000 UTC m=+356.094708438"
Feb 02 14:39:14 crc kubenswrapper[4869]: I0202 14:39:14.450828 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hh8gt" podStartSLOduration=1.91828054 podStartE2EDuration="3.450822677s" podCreationTimestamp="2026-02-02 14:39:11 +0000 UTC" firstStartedPulling="2026-02-02 14:39:12.377543506 +0000 UTC m=+354.022180276" lastFinishedPulling="2026-02-02 14:39:13.910085643 +0000 UTC m=+355.554722413" observedRunningTime="2026-02-02 14:39:14.449320999 +0000 UTC m=+356.093957769" watchObservedRunningTime="2026-02-02 14:39:14.450822677 +0000 UTC m=+356.095459447"
Feb 02 14:39:15 crc kubenswrapper[4869]: I0202 14:39:15.305030 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:39:15 crc kubenswrapper[4869]: I0202 14:39:15.305110 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.701876 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.702350 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.746490 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.918781 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.918859 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:18 crc kubenswrapper[4869]: I0202 14:39:18.976307 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:19 crc kubenswrapper[4869]: I0202 14:39:19.488680 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xjh6d"
Feb 02 14:39:19 crc kubenswrapper[4869]: I0202 14:39:19.489223 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ndh2z"
Feb 02 14:39:20 crc kubenswrapper[4869]: I0202 14:39:20.518211 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:20 crc kubenswrapper[4869]: I0202 14:39:20.518259 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:20 crc kubenswrapper[4869]: I0202 14:39:20.574537 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:21 crc kubenswrapper[4869]: I0202 14:39:21.492997 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7q5gz"
Feb 02 14:39:21 crc kubenswrapper[4869]: I0202 14:39:21.504109 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:21 crc kubenswrapper[4869]: I0202 14:39:21.504203 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:21 crc kubenswrapper[4869]: I0202 14:39:21.548649 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:22 crc kubenswrapper[4869]: I0202 14:39:22.508839 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hh8gt"
Feb 02 14:39:45 crc kubenswrapper[4869]: I0202 14:39:45.304178 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:39:45 crc kubenswrapper[4869]: I0202 14:39:45.305273 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.701237 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvqz"]
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.702618 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.725092 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvqz"]
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.797745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b0f13f-4134-4679-9f31-aef45d67a17e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.797835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b0f13f-4134-4679-9f31-aef45d67a17e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-trusted-ca\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798409 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-certificates\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzlh\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-kube-api-access-gkzlh\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.798585 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-tls\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.832689 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899677 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b0f13f-4134-4679-9f31-aef45d67a17e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b0f13f-4134-4679-9f31-aef45d67a17e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-trusted-ca\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899820 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899853 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-certificates\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkzlh\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-kube-api-access-gkzlh\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.899957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-tls\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.900492 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68b0f13f-4134-4679-9f31-aef45d67a17e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.901866 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-certificates\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.902556 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68b0f13f-4134-4679-9f31-aef45d67a17e-trusted-ca\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.908037 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68b0f13f-4134-4679-9f31-aef45d67a17e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.908606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-registry-tls\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.918125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:49 crc kubenswrapper[4869]: I0202 14:39:49.919044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkzlh\" (UniqueName: \"kubernetes.io/projected/68b0f13f-4134-4679-9f31-aef45d67a17e-kube-api-access-gkzlh\") pod \"image-registry-66df7c8f76-cfvqz\" (UID: \"68b0f13f-4134-4679-9f31-aef45d67a17e\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.021819 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.444000 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvqz"]
Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.632837 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" event={"ID":"68b0f13f-4134-4679-9f31-aef45d67a17e","Type":"ContainerStarted","Data":"32aed9b4821c581a755924819611e1370a6a0f4dcb8740689d02a250b4b34b9e"}
Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.632899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" event={"ID":"68b0f13f-4134-4679-9f31-aef45d67a17e","Type":"ContainerStarted","Data":"bfb6bf8a6f3421fa190fbe7d00511fb3c6e005376a108640c41108024d9c8e31"}
Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.633007 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:39:50 crc kubenswrapper[4869]: I0202 14:39:50.654879 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz" podStartSLOduration=1.654861419 podStartE2EDuration="1.654861419s" podCreationTimestamp="2026-02-02 14:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:39:50.650227355 +0000 UTC m=+392.294864145" watchObservedRunningTime="2026-02-02 14:39:50.654861419 +0000 UTC m=+392.299498179"
Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.598609 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.599636 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.600974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.612901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:39:55 crc kubenswrapper[4869]: I0202 14:39:55.863404 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.613864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.614770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.622547 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.622569 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.671006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3522fd55108264ab7d8c239ae644ed2ab9033308946e948fcf49170011ce4de1"}
Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.671060 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"70137a8a2f3c20fb6a39efa808c246b234aab2a7c954f80bbd0795e5f798f3f9"}
Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.763093 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:39:56 crc kubenswrapper[4869]: I0202 14:39:56.867214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 14:39:57 crc kubenswrapper[4869]: W0202 14:39:57.070592 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a9d7795010bda515a9d4e7c2c97aa0d56dbaf432907e52f0c24bd70a78b5fd40 WatchSource:0}: Error finding container a9d7795010bda515a9d4e7c2c97aa0d56dbaf432907e52f0c24bd70a78b5fd40: Status 404 returned error can't find the container with id a9d7795010bda515a9d4e7c2c97aa0d56dbaf432907e52f0c24bd70a78b5fd40
Feb 02 14:39:57 crc kubenswrapper[4869]: W0202 14:39:57.119302 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e64a996288904f0c153f914c025ff66d17d0938a9115012b43d2881b4f0d551a WatchSource:0}: Error finding container e64a996288904f0c153f914c025ff66d17d0938a9115012b43d2881b4f0d551a: Status 404 returned error can't find the container with id e64a996288904f0c153f914c025ff66d17d0938a9115012b43d2881b4f0d551a
Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.677454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"840587376a761b12f0164e7ebc684fac3c74f6d95b8a3d7695db7160ea95cd4c"}
Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.677957 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e64a996288904f0c153f914c025ff66d17d0938a9115012b43d2881b4f0d551a"}
Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.679582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1d3d7e2acd859a1a5e44debb32a8531cebcbe65c335e23d8ffaee1119f5492e9"}
Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.679620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a9d7795010bda515a9d4e7c2c97aa0d56dbaf432907e52f0c24bd70a78b5fd40"}
Feb 02 14:39:57 crc kubenswrapper[4869]: I0202 14:39:57.679795 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 14:40:10 crc kubenswrapper[4869]: I0202 14:40:10.029788 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cfvqz"
Feb 02 14:40:10 crc kubenswrapper[4869]: I0202 14:40:10.094418 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"]
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.304038 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.305760 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.305877 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.306668 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.306830 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24" gracePeriod=600
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.821280 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24" exitCode=0
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.821352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24"}
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.821402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa"}
Feb 02 14:40:15 crc kubenswrapper[4869]: I0202 14:40:15.821442 4869 scope.go:117] "RemoveContainer" containerID="322667c83789af95c306ed822aa1d8a35bb4feee27bf8b5ac48d7e4f46e7df9b"
Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.143684 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerName="registry" containerID="cri-o://d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" gracePeriod=30
Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.522230 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp"
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670392 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670460 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670516 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.670930 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.671001 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") pod \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\" (UID: \"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97\") " Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.672586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.672740 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.678348 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.678837 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx" (OuterVolumeSpecName: "kube-api-access-2xsnx") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "kube-api-access-2xsnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.679425 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.683898 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.689554 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.690574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" (UID: "dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773071 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773135 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773147 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773163 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773172 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xsnx\" (UniqueName: \"kubernetes.io/projected/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-kube-api-access-2xsnx\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773182 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.773190 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972362 4869 generic.go:334] "Generic (PLEG): container finished" podID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerID="d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" exitCode=0 Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" event={"ID":"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97","Type":"ContainerDied","Data":"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a"} Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" event={"ID":"dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97","Type":"ContainerDied","Data":"01667812f7e6645cb860ced8b102804d576ed3f29c6ca44dd1412aa113ccd9cf"} Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972465 4869 scope.go:117] "RemoveContainer" containerID="d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.972608 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-42krp" Feb 02 14:40:35 crc kubenswrapper[4869]: I0202 14:40:35.997057 4869 scope.go:117] "RemoveContainer" containerID="d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" Feb 02 14:40:36 crc kubenswrapper[4869]: E0202 14:40:36.000779 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a\": container with ID starting with d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a not found: ID does not exist" containerID="d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a" Feb 02 14:40:36 crc kubenswrapper[4869]: I0202 14:40:36.000899 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a"} err="failed to get container status \"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a\": rpc error: code = NotFound desc = could not find container \"d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a\": container with ID starting with d059b87f8f3ed8eef5f1866c112cbe6514cdb398d2b48106d26457d9b067911a not found: ID does not exist" Feb 02 14:40:36 crc kubenswrapper[4869]: I0202 14:40:36.014309 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:40:36 crc kubenswrapper[4869]: I0202 14:40:36.020167 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-42krp"] Feb 02 14:40:36 crc kubenswrapper[4869]: I0202 14:40:36.777823 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 14:40:37 crc kubenswrapper[4869]: I0202 14:40:37.475778 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" path="/var/lib/kubelet/pods/dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97/volumes" Feb 02 14:42:15 crc kubenswrapper[4869]: I0202 14:42:15.304458 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:42:15 crc kubenswrapper[4869]: I0202 14:42:15.306182 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:42:45 crc kubenswrapper[4869]: I0202 14:42:45.304868 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:42:45 crc kubenswrapper[4869]: I0202 14:42:45.305621 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.304770 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.305561 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.305638 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.306399 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.306472 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa" gracePeriod=600 Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.968706 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa" exitCode=0 Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.968806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa"} Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.969476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486"} Feb 02 14:43:15 crc kubenswrapper[4869]: I0202 14:43:15.969512 4869 scope.go:117] "RemoveContainer" containerID="cc8af1c0b0e0fdab0489147c37a0fdb880776d375afd2a5de0984fdc40531c24" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.363515 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-498mc"] Feb 02 14:44:11 crc kubenswrapper[4869]: E0202 14:44:11.364616 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerName="registry" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.364639 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerName="registry" Feb 02 14:44:11 crc 
kubenswrapper[4869]: I0202 14:44:11.364800 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe54b4f-c3d6-40ec-8d5d-422b6d86ad97" containerName="registry" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.365396 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.368557 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.368585 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.368607 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-66t7x" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.376596 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7j57w"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.377662 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.390695 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-498mc"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.397855 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vd825" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.415921 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7j57w"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.434586 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dfqjm"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.435703 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.437020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm56c\" (UniqueName: \"kubernetes.io/projected/92227558-4fbe-40b7-8a51-f9ba7043125a-kube-api-access-nm56c\") pod \"cert-manager-cainjector-cf98fcc89-498mc\" (UID: \"92227558-4fbe-40b7-8a51-f9ba7043125a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.437124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvjl\" (UniqueName: \"kubernetes.io/projected/d96c83c3-8f98-40c8-85f8-37cdf10eaeb7-kube-api-access-9vvjl\") pod \"cert-manager-858654f9db-7j57w\" (UID: \"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7\") " pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.439361 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5c7xq" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.445475 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dfqjm"] Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.539892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvjl\" (UniqueName: \"kubernetes.io/projected/d96c83c3-8f98-40c8-85f8-37cdf10eaeb7-kube-api-access-9vvjl\") pod \"cert-manager-858654f9db-7j57w\" (UID: \"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7\") " pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.540379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdq7\" (UniqueName: \"kubernetes.io/projected/804bb5fc-4d8e-4f9f-892b-6d9af2943dbd-kube-api-access-xgdq7\") pod \"cert-manager-webhook-687f57d79b-dfqjm\" (UID: \"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.540421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm56c\" (UniqueName: \"kubernetes.io/projected/92227558-4fbe-40b7-8a51-f9ba7043125a-kube-api-access-nm56c\") pod \"cert-manager-cainjector-cf98fcc89-498mc\" (UID: \"92227558-4fbe-40b7-8a51-f9ba7043125a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.566140 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm56c\" (UniqueName: \"kubernetes.io/projected/92227558-4fbe-40b7-8a51-f9ba7043125a-kube-api-access-nm56c\") pod \"cert-manager-cainjector-cf98fcc89-498mc\" (UID: \"92227558-4fbe-40b7-8a51-f9ba7043125a\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.566186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvjl\" (UniqueName: \"kubernetes.io/projected/d96c83c3-8f98-40c8-85f8-37cdf10eaeb7-kube-api-access-9vvjl\") pod \"cert-manager-858654f9db-7j57w\" (UID: \"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7\") " pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.641795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xgdq7\" (UniqueName: \"kubernetes.io/projected/804bb5fc-4d8e-4f9f-892b-6d9af2943dbd-kube-api-access-xgdq7\") pod \"cert-manager-webhook-687f57d79b-dfqjm\" (UID: \"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.664823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdq7\" (UniqueName: \"kubernetes.io/projected/804bb5fc-4d8e-4f9f-892b-6d9af2943dbd-kube-api-access-xgdq7\") pod \"cert-manager-webhook-687f57d79b-dfqjm\" (UID: \"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.689521 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.702999 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7j57w" Feb 02 14:44:11 crc kubenswrapper[4869]: I0202 14:44:11.759569 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.068399 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dfqjm"] Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.078797 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.192755 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-498mc"] Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.195951 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7j57w"] Feb 02 14:44:12 crc kubenswrapper[4869]: W0202 14:44:12.200591 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92227558_4fbe_40b7_8a51_f9ba7043125a.slice/crio-4994d6ebd85ca925e822fc17a88dfd9e3c4dcb6e2547b012400157cf4cb5801b WatchSource:0}: Error finding container 4994d6ebd85ca925e822fc17a88dfd9e3c4dcb6e2547b012400157cf4cb5801b: Status 404 returned error can't find the container with id 4994d6ebd85ca925e822fc17a88dfd9e3c4dcb6e2547b012400157cf4cb5801b Feb 02 14:44:12 crc kubenswrapper[4869]: W0202 14:44:12.202857 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96c83c3_8f98_40c8_85f8_37cdf10eaeb7.slice/crio-83a9a494d08e310642efbdfcf8c5935b45230f66b4fbcb19370d983accf62dd5 WatchSource:0}: Error finding container 83a9a494d08e310642efbdfcf8c5935b45230f66b4fbcb19370d983accf62dd5: Status 404 returned error can't find the container with id 83a9a494d08e310642efbdfcf8c5935b45230f66b4fbcb19370d983accf62dd5 Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.332769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7j57w" event={"ID":"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7","Type":"ContainerStarted","Data":"83a9a494d08e310642efbdfcf8c5935b45230f66b4fbcb19370d983accf62dd5"} Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.334361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" 
event={"ID":"92227558-4fbe-40b7-8a51-f9ba7043125a","Type":"ContainerStarted","Data":"4994d6ebd85ca925e822fc17a88dfd9e3c4dcb6e2547b012400157cf4cb5801b"} Feb 02 14:44:12 crc kubenswrapper[4869]: I0202 14:44:12.335682 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" event={"ID":"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd","Type":"ContainerStarted","Data":"65051957c6e408a3cb9a29d050951c2b90d76c6dd42e58fb0d821538e0a2e0e9"} Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.430397 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7j57w" event={"ID":"d96c83c3-8f98-40c8-85f8-37cdf10eaeb7","Type":"ContainerStarted","Data":"0f22eb8fa541be17ecade5beb6c29aff2ab4b25b0f1cb555ca484a406d45f81b"} Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.433092 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" event={"ID":"92227558-4fbe-40b7-8a51-f9ba7043125a","Type":"ContainerStarted","Data":"a13b64ac43b4ac85dd7f9f794c3d9573e2f89b04d18ee26f581cb4a91a2b1bf1"} Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.435488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" event={"ID":"804bb5fc-4d8e-4f9f-892b-6d9af2943dbd","Type":"ContainerStarted","Data":"2b1494928ffdf68d62788d8e79f52641c3176be54728602de0852e36e5b9607b"} Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.435662 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.451846 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7j57w" podStartSLOduration=2.519794061 podStartE2EDuration="7.451818227s" podCreationTimestamp="2026-02-02 14:44:11 +0000 UTC" firstStartedPulling="2026-02-02 14:44:12.210674421 +0000 UTC m=+653.855311191" lastFinishedPulling="2026-02-02 14:44:17.142698587 +0000 UTC m=+658.787335357" observedRunningTime="2026-02-02 14:44:18.445138812 +0000 UTC m=+660.089775582" watchObservedRunningTime="2026-02-02 14:44:18.451818227 +0000 UTC m=+660.096454997" Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.470807 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" podStartSLOduration=2.2600140189999998 podStartE2EDuration="7.470782455s" podCreationTimestamp="2026-02-02 14:44:11 +0000 UTC" firstStartedPulling="2026-02-02 14:44:12.07855907 +0000 UTC m=+653.723195840" lastFinishedPulling="2026-02-02 14:44:17.289327506 +0000 UTC m=+658.933964276" observedRunningTime="2026-02-02 14:44:18.469131185 +0000 UTC m=+660.113767955" watchObservedRunningTime="2026-02-02 14:44:18.470782455 +0000 UTC m=+660.115419225" Feb 02 14:44:18 crc kubenswrapper[4869]: I0202 14:44:18.496076 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-498mc" podStartSLOduration=2.414307538 podStartE2EDuration="7.496049899s" podCreationTimestamp="2026-02-02 14:44:11 +0000 UTC" firstStartedPulling="2026-02-02 14:44:12.203530885 +0000 UTC m=+653.848167655" lastFinishedPulling="2026-02-02 14:44:17.285273246 +0000 UTC m=+658.929910016" observedRunningTime="2026-02-02 14:44:18.490263656 +0000 UTC m=+660.134900426" watchObservedRunningTime="2026-02-02 14:44:18.496049899 +0000 UTC m=+660.140686669" Feb 02 
14:44:26 crc kubenswrapper[4869]: I0202 14:44:26.763161 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dfqjm" Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.882784 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmsw6"] Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.885894 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-controller" containerID="cri-o://879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886029 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="northd" containerID="cri-o://f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886091 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-node" containerID="cri-o://2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886186 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" containerID="cri-o://6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886280 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-acl-logging" containerID="cri-o://236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886254 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.886251 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" containerID="cri-o://42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" gracePeriod=30 Feb 02 14:44:43 crc kubenswrapper[4869]: I0202 14:44:43.920733 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" containerID="cri-o://4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" gracePeriod=30 Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.738548 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.738582 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.740048 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.740044 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.741668 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.741690 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.741718 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" Feb 02 14:44:44 crc kubenswrapper[4869]: E0202 14:44:44.741765 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" Feb 02 14:44:45 crc kubenswrapper[4869]: I0202 14:44:45.589724 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:44:45 crc kubenswrapper[4869]: I0202 14:44:45.592726 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-acl-logging/0.log" Feb 02 14:44:45 crc kubenswrapper[4869]: I0202 14:44:45.593960 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" exitCode=143 Feb 02 14:44:45 crc kubenswrapper[4869]: I0202 14:44:45.594049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.729809 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovnkube-controller/3.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.735701 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-acl-logging/0.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.736668 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-controller/0.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737287 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" exitCode=0 Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737317 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" exitCode=0 Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737326 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" exitCode=0 Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737335 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" 
containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" exitCode=143 Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737357 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737400 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737412 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.737429 4869 scope.go:117] "RemoveContainer" containerID="63bc2c9bc90b9fab3d75a45efcf106325408f08ff1ab4e7b2ad5b92cad760ee0" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.830550 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-acl-logging/0.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.831416 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-controller/0.log" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.833589 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.914943 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7pc72"] Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915230 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915247 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915261 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915269 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915283 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915290 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915303 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-node" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915310 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-node" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915322 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="northd" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915329 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="northd" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915339 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kubecfg-setup" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915346 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kubecfg-setup" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915361 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915368 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915377 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915384 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915392 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915399 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915407 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-acl-logging" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915416 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-acl-logging" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915426 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915434 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915557 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-acl-logging" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915566 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="nbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915578 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915587 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="sbdb" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915598 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915607 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915614 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915624 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="kube-rbac-proxy-node" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915633 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovn-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915642 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="northd" Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915655 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915769 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller" Feb 02 14:44:49 crc kubenswrapper[4869]: 
I0202 14:44:49.915778 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller"
Feb 02 14:44:49 crc kubenswrapper[4869]: E0202 14:44:49.915789 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller"
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915804 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller"
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.915959 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" containerName="ovnkube-controller"
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.918046 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72"
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946718 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946786 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946817 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946850 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946869 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946930 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946946 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.946984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947041 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947131 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947162 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947182 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947201 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947284 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947351 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.947373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") pod \"2865336a-500d-43e5-a075-a9a8fa01b929\" (UID: \"2865336a-500d-43e5-a075-a9a8fa01b929\") "
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948517 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948595 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948621 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948642 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash" (OuterVolumeSpecName: "host-slash") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948663 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log" (OuterVolumeSpecName: "node-log") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948685 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948708 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948729 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948757 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948779 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket" (OuterVolumeSpecName: "log-socket") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.948805 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951142 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951479 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951522 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.951823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.952580 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.958460 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk" (OuterVolumeSpecName: "kube-api-access-r9lzk") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "kube-api-access-r9lzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:44:49 crc kubenswrapper[4869]: I0202 14:44:49.982261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2865336a-500d-43e5-a075-a9a8fa01b929" (UID: "2865336a-500d-43e5-a075-a9a8fa01b929"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048663 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmflx\" (UniqueName: \"kubernetes.io/projected/87557492-f711-45db-abc2-beb315e8aad6-kube-api-access-hmflx\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-var-lib-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048776 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-script-lib\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-kubelet\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-bin\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048833 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-netd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048849 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-slash\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048864 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-ovn\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048890 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048923 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048950 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-systemd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-node-log\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.048984 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-systemd-units\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049003 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-etc-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-log-socket\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049045 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-env-overrides\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049062 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-netns\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049084 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87557492-f711-45db-abc2-beb315e8aad6-ovn-node-metrics-cert\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-config\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049148 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049248 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049407 4869 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049423 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049433 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049443 4869 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049452 4869 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049461 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2865336a-500d-43e5-a075-a9a8fa01b929-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049471 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049508 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049524 4869 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049537 4869 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049546 4869 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049556 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9lzk\" (UniqueName: \"kubernetes.io/projected/2865336a-500d-43e5-a075-a9a8fa01b929-kube-api-access-r9lzk\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049566 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2865336a-500d-43e5-a075-a9a8fa01b929-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049574 4869 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049582 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049590 4869 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049612 4869 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.049621 4869 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2865336a-500d-43e5-a075-a9a8fa01b929-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150540 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87557492-f711-45db-abc2-beb315e8aad6-ovn-node-metrics-cert\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150601 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-config\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150654 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmflx\" (UniqueName: \"kubernetes.io/projected/87557492-f711-45db-abc2-beb315e8aad6-kube-api-access-hmflx\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-var-lib-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150722 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150750 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-script-lib\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150774 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-kubelet\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-bin\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150819 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-slash\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-var-lib-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-netd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-netd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.150978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-ovn\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151141 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-systemd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151181 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-node-log\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151181 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-ovn\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-systemd-units\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-systemd-units\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151347 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-node-log\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-run-systemd\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151334 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-ovn-kubernetes\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151383 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-kubelet\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151402 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-slash\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151411 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-cni-bin\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-etc-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: 
\"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-config\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-ovnkube-script-lib\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151446 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-etc-openvswitch\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151685 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-log-socket\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-log-socket\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-env-overrides\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151794 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-netns\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.151872 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87557492-f711-45db-abc2-beb315e8aad6-host-run-netns\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.152284 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87557492-f711-45db-abc2-beb315e8aad6-env-overrides\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.155038 
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.155038 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87557492-f711-45db-abc2-beb315e8aad6-ovn-node-metrics-cert\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.171376 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmflx\" (UniqueName: \"kubernetes.io/projected/87557492-f711-45db-abc2-beb315e8aad6-kube-api-access-hmflx\") pod \"ovnkube-node-7pc72\" (UID: \"87557492-f711-45db-abc2-beb315e8aad6\") " pod="openshift-ovn-kubernetes/ovnkube-node-7pc72"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.240753 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.745545 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/2.log"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.746882 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/1.log"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.746977 4869 generic.go:334] "Generic (PLEG): container finished" podID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" containerID="9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9" exitCode=2
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.747114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerDied","Data":"9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9"}
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.747226 4869 scope.go:117] "RemoveContainer" containerID="e899fae987cd1b3609a802f3eb2056f109d894dce6fd65a6f3c25c2e91b71e8a"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.747994 4869 scope.go:117] "RemoveContainer" containerID="9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9"
Feb 02 14:44:50 crc kubenswrapper[4869]: E0202 14:44:50.748228 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-d9vfd_openshift-multus(45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0)\"" pod="openshift-multus/multus-d9vfd" podUID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.749075 4869 generic.go:334] "Generic (PLEG): container finished" podID="87557492-f711-45db-abc2-beb315e8aad6" containerID="8a3f19721a174c0e4bcdc49eaa3b066e19b9f2c36326a3f3437ec28910709dd3" exitCode=0
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.749176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerDied","Data":"8a3f19721a174c0e4bcdc49eaa3b066e19b9f2c36326a3f3437ec28910709dd3"}
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.749211 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"92cf3f0e5b2246382d6a71f4fd45d0dbd5ee40c72954ad51a49843bcff8dfeda"}
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.760399 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-acl-logging/0.log"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.760968 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qmsw6_2865336a-500d-43e5-a075-a9a8fa01b929/ovn-controller/0.log"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761495 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" exitCode=0
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761542 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" exitCode=0
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761550 4869 generic.go:334] "Generic (PLEG): container finished" podID="2865336a-500d-43e5-a075-a9a8fa01b929" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" exitCode=0
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"}
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"}
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761627 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"}
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.761763 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qmsw6" event={"ID":"2865336a-500d-43e5-a075-a9a8fa01b929","Type":"ContainerDied","Data":"ca0e0f37b2bf3d240e5eeec5425678446780834f9687e86b8adc4295de855905"}
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.840484 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmsw6"]
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.840889 4869 scope.go:117] "RemoveContainer" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.859412 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qmsw6"]
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.877231 4869 scope.go:117] "RemoveContainer" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.910990 4869 scope.go:117] "RemoveContainer" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.929082 4869 scope.go:117] "RemoveContainer" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.952731 4869 scope.go:117] "RemoveContainer" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"
Feb 02 14:44:50 crc kubenswrapper[4869]: I0202 14:44:50.970218 4869 scope.go:117] "RemoveContainer" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"
Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.001158 4869 scope.go:117] "RemoveContainer" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"
Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.021171 4869 scope.go:117] "RemoveContainer" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"
Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.053239 4869 scope.go:117] "RemoveContainer" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"
Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.080878 4869 scope.go:117] "RemoveContainer" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"
Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.081966 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": container with ID starting with 4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30 not found: ID does not exist" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"
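[Annotation, not part of the captured log] The pod_workers.go:1301 error above shows kube-multus held in CrashLoopBackOff with "back-off 20s". Kubelet's container restart back-off typically starts at 10s and doubles on each crash up to a 5m cap, so 20s corresponds to the second restart. A sketch of that schedule, under those assumed defaults (not kubelet source):

```go
// Computes the CrashLoopBackOff-style delay for the nth restart,
// assuming a 10s initial delay, doubling per crash, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func crashLoopDelay(restarts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 0; n < 6; n++ {
		fmt.Printf("restart %d -> %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// restart 0 -> 10s, restart 1 -> 20s (matches the log), ... capped at 5m0s
}
```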
container status \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": rpc error: code = NotFound desc = could not find container \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": container with ID starting with 4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082037 4869 scope.go:117] "RemoveContainer" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.082310 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": container with ID starting with 6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb not found: ID does not exist" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082337 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} err="failed to get container status \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": rpc error: code = NotFound desc = could not find container \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": container with ID starting with 6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082351 4869 scope.go:117] "RemoveContainer" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.082693 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": container with ID starting with 42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c not found: ID does not exist" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082715 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} err="failed to get container status \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": rpc error: code = NotFound desc = could not find container \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": container with ID starting with 42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.082728 4869 scope.go:117] "RemoveContainer" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.083053 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": container with ID starting with f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0 not found: ID does not exist" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083099 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} err="failed to get container status \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": rpc error: code = NotFound desc = could not find container \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": container with ID starting with f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083118 4869 scope.go:117] "RemoveContainer" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.083460 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": container with ID starting with 58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9 not found: ID does not exist" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083487 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} err="failed to get container status \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": rpc error: code = NotFound desc = could not find container \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": container with ID starting with 58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083503 4869 scope.go:117] "RemoveContainer" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.083891 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": container with ID starting with 2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f not found: ID does not exist" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083941 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} err="failed to get container status \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": rpc error: code = NotFound desc = could not find container \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": container with ID starting with 2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.083960 4869 scope.go:117] "RemoveContainer" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.084185 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": container with ID starting with 236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5 not found: ID does 
not exist" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.084212 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} err="failed to get container status \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": rpc error: code = NotFound desc = could not find container \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": container with ID starting with 236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.084229 4869 scope.go:117] "RemoveContainer" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.084561 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": container with ID starting with 879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9 not found: ID does not exist" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.084589 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} err="failed to get container status \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": rpc error: code = NotFound desc = could not find container \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": container with ID starting with 879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.084603 4869 scope.go:117] "RemoveContainer" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: E0202 14:44:51.084981 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": container with ID starting with dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a not found: ID does not exist" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085004 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"} err="failed to get container status \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": rpc error: code = NotFound desc = could not find container \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": container with ID starting with dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085018 4869 scope.go:117] "RemoveContainer" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085303 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"} err="failed to get container status \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": rpc error: code = NotFound desc = could not find container \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": container with ID starting with 4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085324 4869 scope.go:117] "RemoveContainer" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085631 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} err="failed to get container status \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": rpc error: code = NotFound desc = could not find container \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": container with ID starting with 6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.085653 4869 scope.go:117] "RemoveContainer" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.086167 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} err="failed to get container status \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": rpc error: code = NotFound desc = could not find container \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": container with ID starting with 42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.086188 4869 scope.go:117] "RemoveContainer" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.086516 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} err="failed to get container status \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": rpc error: code = NotFound desc = could not find container \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": container with ID starting with f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.086572 4869 scope.go:117] "RemoveContainer" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087026 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} err="failed to get container status \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": rpc error: code = NotFound desc = could not find container \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": container with ID starting with 58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9 not found: ID does not exist" Feb 
02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087050 4869 scope.go:117] "RemoveContainer" containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087360 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} err="failed to get container status \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": rpc error: code = NotFound desc = could not find container \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": container with ID starting with 2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087382 4869 scope.go:117] "RemoveContainer" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087690 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} err="failed to get container status \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": rpc error: code = NotFound desc = could not find container \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": container with ID starting with 236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.087716 4869 scope.go:117] "RemoveContainer" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088055 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} err="failed to get container status \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": rpc error: code = NotFound desc = could not find container \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": container with ID starting with 879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088139 4869 scope.go:117] "RemoveContainer" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088413 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"} err="failed to get container status \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": rpc error: code = NotFound desc = could not find container \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": container with ID starting with dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088441 4869 scope.go:117] "RemoveContainer" containerID="4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088688 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30"} err="failed to get container status 
\"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": rpc error: code = NotFound desc = could not find container \"4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30\": container with ID starting with 4d06fd0ff0c1764ab182c16c881a85105f909077c23d515d1c8fc1eadc725a30 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.088717 4869 scope.go:117] "RemoveContainer" containerID="6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.089120 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb"} err="failed to get container status \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": rpc error: code = NotFound desc = could not find container \"6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb\": container with ID starting with 6f4ac5602124e7c70f56278465a51fc61c553a1cc6e660e9eb34f499fd53e6cb not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.089225 4869 scope.go:117] "RemoveContainer" containerID="42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.089768 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c"} err="failed to get container status \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": rpc error: code = NotFound desc = could not find container \"42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c\": container with ID starting with 42d4d32e9ba4ceb8c65ebf6cd1f7526b77b2348d80facddb4ce280945483059c not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.089821 4869 scope.go:117] "RemoveContainer" containerID="f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.090180 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0"} err="failed to get container status \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": rpc error: code = NotFound desc = could not find container \"f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0\": container with ID starting with f56af1f63fe1d135011c38386b4d8b53edeaa61c318cd9856fafee89084394b0 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.090265 4869 scope.go:117] "RemoveContainer" containerID="58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.090988 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9"} err="failed to get container status \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": rpc error: code = NotFound desc = could not find container \"58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9\": container with ID starting with 58796146bf86c742491a787341536d4a843bfff2f2fa11613ed7a5939c6c7bb9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.091124 4869 scope.go:117] "RemoveContainer" 
containerID="2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.091535 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f"} err="failed to get container status \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": rpc error: code = NotFound desc = could not find container \"2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f\": container with ID starting with 2daf6b0d6843fa5f2022e3e0994d97317566f7a7c169550be5323da57ec8542f not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.091564 4869 scope.go:117] "RemoveContainer" containerID="236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.092067 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5"} err="failed to get container status \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": rpc error: code = NotFound desc = could not find container \"236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5\": container with ID starting with 236c62b76cc215e9884eb5674197b107f85929175007f32c1477e670b5baa9b5 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.092155 4869 scope.go:117] "RemoveContainer" containerID="879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.092564 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9"} err="failed to get container status \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": rpc error: code = NotFound desc = could not find container \"879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9\": container with ID starting with 879f2e5b49f702cc42429cd9cd5ffaadacef6c0b26a33ae3c4a096eb61a74df9 not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.092649 4869 scope.go:117] "RemoveContainer" containerID="dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.093004 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a"} err="failed to get container status \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": rpc error: code = NotFound desc = could not find container \"dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a\": container with ID starting with dc5ed5f25c51a5f5490dd9be6ba206f378d808189fd730120e9ab5fec426539a not found: ID does not exist" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.474656 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2865336a-500d-43e5-a075-a9a8fa01b929" path="/var/lib/kubelet/pods/2865336a-500d-43e5-a075-a9a8fa01b929/volumes" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.769587 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/2.log" Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773443 4869 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"c9be79deef99612295b7caa7dfca1612968b0e5ae16bff7d0d78a32b3e5807a1"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"8fa02ad9a4443651471568a5d67224ebf6ebcece67c3a49555b87761685be987"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"a0728cb3ce9d558c36ab033e2398d50677da9edeced8b02c52b008cf61e15c43"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"f6abfd62c36997603b140d03f8b50ff845abb9c387eb8ba76826ace576df937c"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773518 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"09fb08f5fd7c070b0c8a8b94cd9b0f840dd624f10bab8306e99ce06f4ac386ef"} Feb 02 14:44:51 crc kubenswrapper[4869]: I0202 14:44:51.773527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"2762f0391747bceb22a017d0b2f1ac6b6f793cec083e1076db37abe1eed4dea2"} Feb 02 14:44:54 crc kubenswrapper[4869]: I0202 14:44:54.805209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"6840f38e8939d3f45764974f9e560c0780bfc4658b38bb920e707f73314d714c"} Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.823145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" event={"ID":"87557492-f711-45db-abc2-beb315e8aad6","Type":"ContainerStarted","Data":"94d77fc4a29ff2ad3e13b72e28a5645353aa7c282ce742e8c2988760370ef712"} Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.823998 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.824018 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.856070 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:56 crc kubenswrapper[4869]: I0202 14:44:56.861175 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" podStartSLOduration=7.861148851 podStartE2EDuration="7.861148851s" podCreationTimestamp="2026-02-02 14:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:44:56.858652339 +0000 UTC m=+698.503289129" watchObservedRunningTime="2026-02-02 14:44:56.861148851 
+0000 UTC m=+698.505785621" Feb 02 14:44:57 crc kubenswrapper[4869]: I0202 14:44:57.829820 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:44:57 crc kubenswrapper[4869]: I0202 14:44:57.862490 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.186041 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.187608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.190412 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.191093 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.198184 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.216677 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.216790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.216830 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.318247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.318339 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.318358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.319311 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.325991 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.337807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") pod \"collect-profiles-29500725-v4bfh\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.511066 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.541208 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(cd4c689348b890aa55e120acced8b3913914f1ec6556fc5c4fe3b4e1d2e23789): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.541327 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(cd4c689348b890aa55e120acced8b3913914f1ec6556fc5c4fe3b4e1d2e23789): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.541358 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(cd4c689348b890aa55e120acced8b3913914f1ec6556fc5c4fe3b4e1d2e23789): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.541448 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(cd4c689348b890aa55e120acced8b3913914f1ec6556fc5c4fe3b4e1d2e23789): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.847706 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: I0202 14:45:00.848379 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.873429 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(474040a1c1b74cf8a215250e90fd2ade2e46dc7d86c2f49a940cefbbeeafd7d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.874150 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(474040a1c1b74cf8a215250e90fd2ade2e46dc7d86c2f49a940cefbbeeafd7d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.874204 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(474040a1c1b74cf8a215250e90fd2ade2e46dc7d86c2f49a940cefbbeeafd7d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:00 crc kubenswrapper[4869]: E0202 14:45:00.874277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(474040a1c1b74cf8a215250e90fd2ade2e46dc7d86c2f49a940cefbbeeafd7d2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" Feb 02 14:45:04 crc kubenswrapper[4869]: I0202 14:45:04.463222 4869 scope.go:117] "RemoveContainer" containerID="9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9" Feb 02 14:45:04 crc kubenswrapper[4869]: E0202 14:45:04.464385 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-d9vfd_openshift-multus(45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0)\"" pod="openshift-multus/multus-d9vfd" podUID="45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0" Feb 02 14:45:14 crc kubenswrapper[4869]: I0202 14:45:14.463239 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:14 crc kubenswrapper[4869]: I0202 14:45:14.464244 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:14 crc kubenswrapper[4869]: E0202 14:45:14.495128 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(85087dd2151b060448d3d5eccc886f79ec3cc14e0f169743fbc8a4636dd30c1c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:45:14 crc kubenswrapper[4869]: E0202 14:45:14.495225 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(85087dd2151b060448d3d5eccc886f79ec3cc14e0f169743fbc8a4636dd30c1c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:14 crc kubenswrapper[4869]: E0202 14:45:14.495255 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(85087dd2151b060448d3d5eccc886f79ec3cc14e0f169743fbc8a4636dd30c1c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:14 crc kubenswrapper[4869]: E0202 14:45:14.495312 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager(f4a6eca8-9d17-4791-add2-36c7119da5a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29500725-v4bfh_openshift-operator-lifecycle-manager_f4a6eca8-9d17-4791-add2-36c7119da5a5_0(85087dd2151b060448d3d5eccc886f79ec3cc14e0f169743fbc8a4636dd30c1c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.304212 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.304754 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.403752 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"] Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.405073 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.408103 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.420054 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"] Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.444020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.444076 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.444118 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.544802 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.544912 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.544990 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.545624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.546053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.568173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.722115 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.751948 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(f52d8f76822de2fa1c0494d874bd5da847cdf1d4a5deff22a28972bcdc49a248): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.752067 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(f52d8f76822de2fa1c0494d874bd5da847cdf1d4a5deff22a28972bcdc49a248): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.752102 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(f52d8f76822de2fa1c0494d874bd5da847cdf1d4a5deff22a28972bcdc49a248): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.752173 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace(264a08a0-30f5-4b76-af09-b97629a44d89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace(264a08a0-30f5-4b76-af09-b97629a44d89)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(f52d8f76822de2fa1c0494d874bd5da847cdf1d4a5deff22a28972bcdc49a248): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.935648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: I0202 14:45:15.936364 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.959362 4869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(a9d06cea4b514d4f86229b09dd0bb909f10c76c30604280caf7951e155931acb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.959520 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(a9d06cea4b514d4f86229b09dd0bb909f10c76c30604280caf7951e155931acb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.959601 4869 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(a9d06cea4b514d4f86229b09dd0bb909f10c76c30604280caf7951e155931acb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:15 crc kubenswrapper[4869]: E0202 14:45:15.959716 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace(264a08a0-30f5-4b76-af09-b97629a44d89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace(264a08a0-30f5-4b76-af09-b97629a44d89)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_openshift-marketplace_264a08a0-30f5-4b76-af09-b97629a44d89_0(a9d06cea4b514d4f86229b09dd0bb909f10c76c30604280caf7951e155931acb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" Feb 02 14:45:19 crc kubenswrapper[4869]: I0202 14:45:19.466081 4869 scope.go:117] "RemoveContainer" containerID="9e8e2fba78eed62ec5a7c03e3d1e35248cd3c609ba63e74c7eaf0be37126fdc9" Feb 02 14:45:19 crc kubenswrapper[4869]: I0202 14:45:19.962697 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d9vfd_45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0/kube-multus/2.log" Feb 02 14:45:19 crc kubenswrapper[4869]: I0202 14:45:19.963248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d9vfd" event={"ID":"45d6e7c8-73e5-47b2-9f6b-ea686e63f2e0","Type":"ContainerStarted","Data":"56ecb779755ed2fcdbb7598926faae2bd7dfcd26dd50f7a81b3afee1529e398a"} Feb 02 14:45:20 crc kubenswrapper[4869]: I0202 14:45:20.262982 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7pc72" Feb 02 14:45:25 crc kubenswrapper[4869]: I0202 14:45:25.462078 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:25 crc kubenswrapper[4869]: I0202 14:45:25.464526 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:25 crc kubenswrapper[4869]: I0202 14:45:25.892978 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 14:45:25 crc kubenswrapper[4869]: W0202 14:45:25.906583 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4a6eca8_9d17_4791_add2_36c7119da5a5.slice/crio-a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea WatchSource:0}: Error finding container a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea: Status 404 returned error can't find the container with id a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea Feb 02 14:45:26 crc kubenswrapper[4869]: I0202 14:45:26.005492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" event={"ID":"f4a6eca8-9d17-4791-add2-36c7119da5a5","Type":"ContainerStarted","Data":"a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea"} Feb 02 14:45:27 crc kubenswrapper[4869]: I0202 14:45:27.018122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" event={"ID":"f4a6eca8-9d17-4791-add2-36c7119da5a5","Type":"ContainerStarted","Data":"28b9935993b50888d9171d31e34b1e8a7654cd4a7e60abd6660f4755c8d99b31"} Feb 02 14:45:28 crc kubenswrapper[4869]: I0202 14:45:28.025459 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4a6eca8-9d17-4791-add2-36c7119da5a5" containerID="28b9935993b50888d9171d31e34b1e8a7654cd4a7e60abd6660f4755c8d99b31" exitCode=0 Feb 02 14:45:28 crc kubenswrapper[4869]: I0202 14:45:28.025532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" event={"ID":"f4a6eca8-9d17-4791-add2-36c7119da5a5","Type":"ContainerDied","Data":"28b9935993b50888d9171d31e34b1e8a7654cd4a7e60abd6660f4755c8d99b31"} Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.247252 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.352288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") pod \"f4a6eca8-9d17-4791-add2-36c7119da5a5\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.352381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") pod \"f4a6eca8-9d17-4791-add2-36c7119da5a5\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.352444 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") pod \"f4a6eca8-9d17-4791-add2-36c7119da5a5\" (UID: \"f4a6eca8-9d17-4791-add2-36c7119da5a5\") " Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.353967 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "f4a6eca8-9d17-4791-add2-36c7119da5a5" (UID: "f4a6eca8-9d17-4791-add2-36c7119da5a5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.361004 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f4a6eca8-9d17-4791-add2-36c7119da5a5" (UID: "f4a6eca8-9d17-4791-add2-36c7119da5a5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.361149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft" (OuterVolumeSpecName: "kube-api-access-djxft") pod "f4a6eca8-9d17-4791-add2-36c7119da5a5" (UID: "f4a6eca8-9d17-4791-add2-36c7119da5a5"). InnerVolumeSpecName "kube-api-access-djxft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.454497 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4a6eca8-9d17-4791-add2-36c7119da5a5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.454578 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxft\" (UniqueName: \"kubernetes.io/projected/f4a6eca8-9d17-4791-add2-36c7119da5a5-kube-api-access-djxft\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.454613 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f4a6eca8-9d17-4791-add2-36c7119da5a5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.461897 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.468420 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:29 crc kubenswrapper[4869]: I0202 14:45:29.681366 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4"] Feb 02 14:45:30 crc kubenswrapper[4869]: I0202 14:45:30.041726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerStarted","Data":"3d55704d4b09f212b5146fa8b98350280e9257c874ccbfd3096bb9d93f76f046"} Feb 02 14:45:30 crc kubenswrapper[4869]: I0202 14:45:30.045197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" event={"ID":"f4a6eca8-9d17-4791-add2-36c7119da5a5","Type":"ContainerDied","Data":"a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea"} Feb 02 14:45:30 crc kubenswrapper[4869]: I0202 14:45:30.045484 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a939c461b137e338a79087b02f62ff390242b08877a42e0188714090bdec17ea" Feb 02 14:45:30 crc kubenswrapper[4869]: I0202 14:45:30.045344 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh" Feb 02 14:45:31 crc kubenswrapper[4869]: I0202 14:45:31.054024 4869 generic.go:334] "Generic (PLEG): container finished" podID="264a08a0-30f5-4b76-af09-b97629a44d89" containerID="dbeb3dc825ddaeab08d8880d37488299a02f6c4ff1dc855f4e1c5730b37c3cd1" exitCode=0 Feb 02 14:45:31 crc kubenswrapper[4869]: I0202 14:45:31.054210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerDied","Data":"dbeb3dc825ddaeab08d8880d37488299a02f6c4ff1dc855f4e1c5730b37c3cd1"} Feb 02 14:45:33 crc kubenswrapper[4869]: I0202 14:45:33.069647 4869 generic.go:334] "Generic (PLEG): container finished" podID="264a08a0-30f5-4b76-af09-b97629a44d89" containerID="9ec0f3627a9f2311679c1c3553aa17b3c4552ddf0042b3602aa64ae0827531d3" exitCode=0 Feb 02 14:45:33 crc kubenswrapper[4869]: I0202 14:45:33.070156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerDied","Data":"9ec0f3627a9f2311679c1c3553aa17b3c4552ddf0042b3602aa64ae0827531d3"} Feb 02 14:45:34 crc kubenswrapper[4869]: I0202 14:45:34.077758 4869 generic.go:334] "Generic (PLEG): container finished" podID="264a08a0-30f5-4b76-af09-b97629a44d89" containerID="f7b02e4164f64e068a6c2ef52f128d0be24196b740fc6632ad07b6bb50424192" exitCode=0 Feb 02 14:45:34 crc kubenswrapper[4869]: I0202 14:45:34.077819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerDied","Data":"f7b02e4164f64e068a6c2ef52f128d0be24196b740fc6632ad07b6bb50424192"} Feb 02 
14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.320856 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.444244 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") pod \"264a08a0-30f5-4b76-af09-b97629a44d89\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.444427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") pod \"264a08a0-30f5-4b76-af09-b97629a44d89\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.444554 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") pod \"264a08a0-30f5-4b76-af09-b97629a44d89\" (UID: \"264a08a0-30f5-4b76-af09-b97629a44d89\") " Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.446188 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle" (OuterVolumeSpecName: "bundle") pod "264a08a0-30f5-4b76-af09-b97629a44d89" (UID: "264a08a0-30f5-4b76-af09-b97629a44d89"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.454272 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5" (OuterVolumeSpecName: "kube-api-access-zj9f5") pod "264a08a0-30f5-4b76-af09-b97629a44d89" (UID: "264a08a0-30f5-4b76-af09-b97629a44d89"). InnerVolumeSpecName "kube-api-access-zj9f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.546745 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj9f5\" (UniqueName: \"kubernetes.io/projected/264a08a0-30f5-4b76-af09-b97629a44d89-kube-api-access-zj9f5\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.546802 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.675083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util" (OuterVolumeSpecName: "util") pod "264a08a0-30f5-4b76-af09-b97629a44d89" (UID: "264a08a0-30f5-4b76-af09-b97629a44d89"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:45:35 crc kubenswrapper[4869]: I0202 14:45:35.750171 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264a08a0-30f5-4b76-af09-b97629a44d89-util\") on node \"crc\" DevicePath \"\"" Feb 02 14:45:36 crc kubenswrapper[4869]: I0202 14:45:36.093783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" event={"ID":"264a08a0-30f5-4b76-af09-b97629a44d89","Type":"ContainerDied","Data":"3d55704d4b09f212b5146fa8b98350280e9257c874ccbfd3096bb9d93f76f046"} Feb 02 14:45:36 crc kubenswrapper[4869]: I0202 14:45:36.094356 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4" Feb 02 14:45:36 crc kubenswrapper[4869]: I0202 14:45:36.094367 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d55704d4b09f212b5146fa8b98350280e9257c874ccbfd3096bb9d93f76f046" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094155 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bbvzg"] Feb 02 14:45:42 crc kubenswrapper[4869]: E0202 14:45:42.094685 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="extract" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094698 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="extract" Feb 02 14:45:42 crc kubenswrapper[4869]: E0202 14:45:42.094710 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="util" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094716 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="util" Feb 02 14:45:42 crc kubenswrapper[4869]: E0202 14:45:42.094733 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" containerName="collect-profiles" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094740 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" containerName="collect-profiles" Feb 02 14:45:42 crc kubenswrapper[4869]: E0202 14:45:42.094751 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="pull" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094757 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="pull" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094848 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" containerName="collect-profiles" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.094858 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="264a08a0-30f5-4b76-af09-b97629a44d89" containerName="extract" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.095339 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.099150 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ft4ld" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.099390 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.100726 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.116803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bbvzg"] Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.258710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwb8\" (UniqueName: \"kubernetes.io/projected/f417537d-ce1d-461c-afec-09d3ec96c3b4-kube-api-access-hxwb8\") pod \"nmstate-operator-646758c888-bbvzg\" (UID: \"f417537d-ce1d-461c-afec-09d3ec96c3b4\") " pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.360841 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwb8\" (UniqueName: \"kubernetes.io/projected/f417537d-ce1d-461c-afec-09d3ec96c3b4-kube-api-access-hxwb8\") pod \"nmstate-operator-646758c888-bbvzg\" (UID: \"f417537d-ce1d-461c-afec-09d3ec96c3b4\") " pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.389494 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwb8\" (UniqueName: \"kubernetes.io/projected/f417537d-ce1d-461c-afec-09d3ec96c3b4-kube-api-access-hxwb8\") pod \"nmstate-operator-646758c888-bbvzg\" (UID: \"f417537d-ce1d-461c-afec-09d3ec96c3b4\") " pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.411505 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" Feb 02 14:45:42 crc kubenswrapper[4869]: I0202 14:45:42.670040 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bbvzg"] Feb 02 14:45:43 crc kubenswrapper[4869]: I0202 14:45:43.145635 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" event={"ID":"f417537d-ce1d-461c-afec-09d3ec96c3b4","Type":"ContainerStarted","Data":"c13ef5637d3dab855332c53a9870a82b68730461e297e1d5bc7d98f2d0db85ca"} Feb 02 14:45:45 crc kubenswrapper[4869]: I0202 14:45:45.304509 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:45:45 crc kubenswrapper[4869]: I0202 14:45:45.305082 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:45:46 crc kubenswrapper[4869]: I0202 14:45:46.164227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" event={"ID":"f417537d-ce1d-461c-afec-09d3ec96c3b4","Type":"ContainerStarted","Data":"acb4e608d7cc70546f4cc78b7c4f3cd38adf113ad0c4c0da4c37da3930a0db3d"} Feb 02 14:45:46 crc kubenswrapper[4869]: I0202 14:45:46.185760 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bbvzg" podStartSLOduration=1.795509509 podStartE2EDuration="4.185731211s" podCreationTimestamp="2026-02-02 14:45:42 +0000 UTC" firstStartedPulling="2026-02-02 14:45:42.674223065 +0000 UTC m=+744.318859835" lastFinishedPulling="2026-02-02 14:45:45.064444777 +0000 UTC m=+746.709081537" observedRunningTime="2026-02-02 14:45:46.183878626 +0000 UTC m=+747.828515426" watchObservedRunningTime="2026-02-02 14:45:46.185731211 +0000 UTC m=+747.830367981" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.119378 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-647lw"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.121134 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.124820 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5f6cd" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.135017 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5gp\" (UniqueName: \"kubernetes.io/projected/ec9ec105-2660-4787-89f3-5c0fe79e8e97-kube-api-access-zx5gp\") pod \"nmstate-metrics-54757c584b-647lw\" (UID: \"ec9ec105-2660-4787-89f3-5c0fe79e8e97\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.135395 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.136613 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.138880 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-647lw"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.140601 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.169415 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-87g86"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.170447 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.217973 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236447 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4fz7\" (UniqueName: \"kubernetes.io/projected/3d92c75a-462e-4ff9-8373-8d91fb2624f4-kube-api-access-t4fz7\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236573 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5gp\" (UniqueName: \"kubernetes.io/projected/ec9ec105-2660-4787-89f3-5c0fe79e8e97-kube-api-access-zx5gp\") pod \"nmstate-metrics-54757c584b-647lw\" (UID: \"ec9ec105-2660-4787-89f3-5c0fe79e8e97\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236609 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-ovs-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " 
pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236642 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdbn\" (UniqueName: \"kubernetes.io/projected/bd339f13-8405-47aa-b76a-2cef40d3ec11-kube-api-access-rfdbn\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236695 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-nmstate-lock\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.236724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-dbus-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.272209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx5gp\" (UniqueName: \"kubernetes.io/projected/ec9ec105-2660-4787-89f3-5c0fe79e8e97-kube-api-access-zx5gp\") pod \"nmstate-metrics-54757c584b-647lw\" (UID: \"ec9ec105-2660-4787-89f3-5c0fe79e8e97\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.274016 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.274846 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.282451 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pzplm" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.282714 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.282839 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.290742 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"] Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338424 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4fz7\" (UniqueName: \"kubernetes.io/projected/3d92c75a-462e-4ff9-8373-8d91fb2624f4-kube-api-access-t4fz7\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338507 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfn7v\" (UniqueName: \"kubernetes.io/projected/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-kube-api-access-lfn7v\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338625 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-ovs-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdbn\" (UniqueName: \"kubernetes.io/projected/bd339f13-8405-47aa-b76a-2cef40d3ec11-kube-api-access-rfdbn\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-nmstate-lock\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338755 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-dbus-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338751 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-ovs-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.338833 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-nmstate-lock\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.339161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d92c75a-462e-4ff9-8373-8d91fb2624f4-dbus-socket\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:45:47 crc kubenswrapper[4869]: E0202 14:45:47.339326 4869 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 14:45:47 crc kubenswrapper[4869]: E0202 14:45:47.339413 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair podName:bd339f13-8405-47aa-b76a-2cef40d3ec11 nodeName:}" failed. No retries permitted until 2026-02-02 14:45:47.839385475 +0000 UTC m=+749.484022245 (durationBeforeRetry 500ms). 
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.359736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4fz7\" (UniqueName: \"kubernetes.io/projected/3d92c75a-462e-4ff9-8373-8d91fb2624f4-kube-api-access-t4fz7\") pod \"nmstate-handler-87g86\" (UID: \"3d92c75a-462e-4ff9-8373-8d91fb2624f4\") " pod="openshift-nmstate/nmstate-handler-87g86"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.360146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfdbn\" (UniqueName: \"kubernetes.io/projected/bd339f13-8405-47aa-b76a-2cef40d3ec11-kube-api-access-rfdbn\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.438319 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.439986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.440095 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfn7v\" (UniqueName: \"kubernetes.io/projected/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-kube-api-access-lfn7v\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.440129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.440989 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.443581 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.470361 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfn7v\" (UniqueName: \"kubernetes.io/projected/60ca7e15-9af2-4019-9481-39f8bc9e4ec7-kube-api-access-lfn7v\") pod \"nmstate-console-plugin-7754f76f8b-sk72x\" (UID: \"60ca7e15-9af2-4019-9481-39f8bc9e4ec7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.490073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-87g86"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.517600 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-865678f777-2fzjm"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.518551 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: W0202 14:45:47.532415 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d92c75a_462e_4ff9_8373_8d91fb2624f4.slice/crio-085ce1228da7a6141b144e4ff9567603c7d294580573712c0eec06d220f16fd8 WatchSource:0}: Error finding container 085ce1228da7a6141b144e4ff9567603c7d294580573712c0eec06d220f16fd8: Status 404 returned error can't find the container with id 085ce1228da7a6141b144e4ff9567603c7d294580573712c0eec06d220f16fd8
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.534060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-865678f777-2fzjm"]
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545610 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545654 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-oauth-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545697 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-trusted-ca-bundle\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qkp\" (UniqueName: \"kubernetes.io/projected/272b4fd8-4ae3-4f19-a95e-1824605ae399-kube-api-access-84qkp\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-oauth-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545787 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-service-ca\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.545821 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.613158 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647243 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-oauth-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-trusted-ca-bundle\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647364 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84qkp\" (UniqueName: \"kubernetes.io/projected/272b4fd8-4ae3-4f19-a95e-1824605ae399-kube-api-access-84qkp\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-oauth-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.647416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-service-ca\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.649007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-service-ca\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.649824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-oauth-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.649943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-trusted-ca-bundle\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.650568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.653739 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-serving-cert\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.657295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/272b4fd8-4ae3-4f19-a95e-1824605ae399-console-oauth-config\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.669978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qkp\" (UniqueName: \"kubernetes.io/projected/272b4fd8-4ae3-4f19-a95e-1824605ae399-kube-api-access-84qkp\") pod \"console-865678f777-2fzjm\" (UID: \"272b4fd8-4ae3-4f19-a95e-1824605ae399\") " pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.845316 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.850452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.855022 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bd339f13-8405-47aa-b76a-2cef40d3ec11-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jf287\" (UID: \"bd339f13-8405-47aa-b76a-2cef40d3ec11\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"
Feb 02 14:45:47 crc kubenswrapper[4869]: I0202 14:45:47.967633 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-647lw"]
Feb 02 14:45:48 crc kubenswrapper[4869]: W0202 14:45:48.048820 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9ec105_2660_4787_89f3_5c0fe79e8e97.slice/crio-7388d9f69316062c39e5346ffcd277cf51649616af47406f60ad567e8132657a WatchSource:0}: Error finding container 7388d9f69316062c39e5346ffcd277cf51649616af47406f60ad567e8132657a: Status 404 returned error can't find the container with id 7388d9f69316062c39e5346ffcd277cf51649616af47406f60ad567e8132657a
Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.058719 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"
Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.126814 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x"]
Feb 02 14:45:48 crc kubenswrapper[4869]: W0202 14:45:48.129134 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60ca7e15_9af2_4019_9481_39f8bc9e4ec7.slice/crio-347db200ab305de67e09cd67857bf0649a66be5e857b8cf125adbfbfa324c503 WatchSource:0}: Error finding container 347db200ab305de67e09cd67857bf0649a66be5e857b8cf125adbfbfa324c503: Status 404 returned error can't find the container with id 347db200ab305de67e09cd67857bf0649a66be5e857b8cf125adbfbfa324c503
Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.190509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" event={"ID":"60ca7e15-9af2-4019-9481-39f8bc9e4ec7","Type":"ContainerStarted","Data":"347db200ab305de67e09cd67857bf0649a66be5e857b8cf125adbfbfa324c503"}
Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.191963 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-87g86" event={"ID":"3d92c75a-462e-4ff9-8373-8d91fb2624f4","Type":"ContainerStarted","Data":"085ce1228da7a6141b144e4ff9567603c7d294580573712c0eec06d220f16fd8"}
Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.193413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" event={"ID":"ec9ec105-2660-4787-89f3-5c0fe79e8e97","Type":"ContainerStarted","Data":"7388d9f69316062c39e5346ffcd277cf51649616af47406f60ad567e8132657a"}
Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.294582 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"]
Feb 02 14:45:48 crc kubenswrapper[4869]: I0202 14:45:48.417777 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-865678f777-2fzjm"]
Feb 02 14:45:49 crc kubenswrapper[4869]: I0202 14:45:49.206776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" event={"ID":"bd339f13-8405-47aa-b76a-2cef40d3ec11","Type":"ContainerStarted","Data":"949d8c0b962ddfab3414b1ba43800a57513de10d43d4a68906d4a12aa0e88898"}
Feb 02 14:45:49 crc kubenswrapper[4869]: I0202 14:45:49.209663 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865678f777-2fzjm" event={"ID":"272b4fd8-4ae3-4f19-a95e-1824605ae399","Type":"ContainerStarted","Data":"7d0a72d0def9e1954932bd02c027699bcc2f0e0170223aa2ff5d374046c4657c"}
Feb 02 14:45:49 crc kubenswrapper[4869]: I0202 14:45:49.209691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865678f777-2fzjm" event={"ID":"272b4fd8-4ae3-4f19-a95e-1824605ae399","Type":"ContainerStarted","Data":"e9a4b0336aa9d9bd269f0d7c1d0acd23be9d2b7b846ab4eb4b7352fb1b115fac"}
Feb 02 14:45:49 crc kubenswrapper[4869]: I0202 14:45:49.243897 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-865678f777-2fzjm" podStartSLOduration=2.243867299 podStartE2EDuration="2.243867299s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:45:49.237069651 +0000 UTC m=+750.881706441" watchObservedRunningTime="2026-02-02 14:45:49.243867299 +0000 UTC m=+750.888504069"
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.268636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-87g86" event={"ID":"3d92c75a-462e-4ff9-8373-8d91fb2624f4","Type":"ContainerStarted","Data":"0807e97b912b347068af22b0f6836def97bf498254497a3e9833b930a6cf14d1"}
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.269311 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-87g86"
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.273416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" event={"ID":"ec9ec105-2660-4787-89f3-5c0fe79e8e97","Type":"ContainerStarted","Data":"8d05a0649952c134f323ae6ba387754e4a9b01acae2733778dd56021c1900585"}
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.275868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" event={"ID":"60ca7e15-9af2-4019-9481-39f8bc9e4ec7","Type":"ContainerStarted","Data":"0291c6d878063957af807812412b2174d87efad86a7f36996de7f795e1b5b967"}
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.277713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" event={"ID":"bd339f13-8405-47aa-b76a-2cef40d3ec11","Type":"ContainerStarted","Data":"153ce006075cf4c3e3bf02efdcbdfdac87f7fdf9af6f76b12f222bbade8c4d89"}
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.278131 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287"
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.292317 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-87g86" podStartSLOduration=1.485079391 podStartE2EDuration="8.292287868s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="2026-02-02 14:45:47.53450416 +0000 UTC m=+749.179140930" lastFinishedPulling="2026-02-02 14:45:54.341712637 +0000 UTC m=+755.986349407" observedRunningTime="2026-02-02 14:45:55.288433753 +0000 UTC m=+756.933070523" watchObservedRunningTime="2026-02-02 14:45:55.292287868 +0000 UTC m=+756.936924658"
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.310137 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" podStartSLOduration=2.156463441 podStartE2EDuration="8.310106518s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="2026-02-02 14:45:48.305291884 +0000 UTC m=+749.949928654" lastFinishedPulling="2026-02-02 14:45:54.458934961 +0000 UTC m=+756.103571731" observedRunningTime="2026-02-02 14:45:55.305533235 +0000 UTC m=+756.950170005" watchObservedRunningTime="2026-02-02 14:45:55.310106518 +0000 UTC m=+756.954743288"
Feb 02 14:45:55 crc kubenswrapper[4869]: I0202 14:45:55.325673 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-sk72x" podStartSLOduration=2.017301066 podStartE2EDuration="8.325648002s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="2026-02-02 14:45:48.133847242 +0000 UTC m=+749.778484012" lastFinishedPulling="2026-02-02 14:45:54.442194178 +0000 UTC m=+756.086830948" observedRunningTime="2026-02-02 14:45:55.32393878 +0000 UTC m=+756.968575560" watchObservedRunningTime="2026-02-02 14:45:55.325648002 +0000 UTC m=+756.970284772"
Feb 02 14:45:57 crc kubenswrapper[4869]: I0202 14:45:57.846642 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:57 crc kubenswrapper[4869]: I0202 14:45:57.847218 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:57 crc kubenswrapper[4869]: I0202 14:45:57.853079 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:58 crc kubenswrapper[4869]: I0202 14:45:58.304559 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-865678f777-2fzjm"
Feb 02 14:45:58 crc kubenswrapper[4869]: I0202 14:45:58.372119 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"]
Feb 02 14:46:01 crc kubenswrapper[4869]: I0202 14:46:01.336055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" event={"ID":"ec9ec105-2660-4787-89f3-5c0fe79e8e97","Type":"ContainerStarted","Data":"9bce820cfabdf958cea11a870204d457ffe3a16ab6a4bccdac0b0902d805f290"}
Feb 02 14:46:02 crc kubenswrapper[4869]: I0202 14:46:02.373012 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-647lw" podStartSLOduration=3.594255048 podStartE2EDuration="15.372989796s" podCreationTimestamp="2026-02-02 14:45:47 +0000 UTC" firstStartedPulling="2026-02-02 14:45:48.056008692 +0000 UTC m=+749.700645462" lastFinishedPulling="2026-02-02 14:45:59.83474344 +0000 UTC m=+761.479380210" observedRunningTime="2026-02-02 14:46:02.371838897 +0000 UTC m=+764.016475667" watchObservedRunningTime="2026-02-02 14:46:02.372989796 +0000 UTC m=+764.017626566"
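[editor's note] In the "Observed pod startup duration" entries above, podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling); that relationship holds exactly for every entry in this log (e.g. for nmstate-handler-87g86: 8.292287868s - 6.807208477s = 1.485079391s). The check below recomputes it from the logged values; it is a verification sketch for this task, and the relationship is stated as an observation about these entries rather than a spec.

```go
package main

import (
	"fmt"
	"time"
)

// Recompute podStartSLOduration for nmstate-handler-87g86 from the
// timestamps logged above: SLO = E2E - (lastFinishedPulling - firstStartedPulling).
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	firstPull, _ := time.Parse(layout, "2026-02-02 14:45:47.53450416 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2026-02-02 14:45:54.341712637 +0000 UTC")
	e2e := 8292287868 * time.Nanosecond // podStartE2EDuration="8.292287868s"

	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // 1.485079391s, matching podStartSLOduration
}
```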
observedRunningTime="2026-02-02 14:46:02.371838897 +0000 UTC m=+764.016475667" watchObservedRunningTime="2026-02-02 14:46:02.372989796 +0000 UTC m=+764.017626566" Feb 02 14:46:02 crc kubenswrapper[4869]: I0202 14:46:02.521656 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-87g86" Feb 02 14:46:08 crc kubenswrapper[4869]: I0202 14:46:08.066944 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jf287" Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.304897 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.305776 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.305865 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.306932 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:46:15 crc kubenswrapper[4869]: I0202 14:46:15.307101 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486" gracePeriod=600 Feb 02 14:46:16 crc kubenswrapper[4869]: I0202 14:46:16.460168 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486" exitCode=0 Feb 02 14:46:16 crc kubenswrapper[4869]: I0202 14:46:16.460282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486"} Feb 02 14:46:16 crc kubenswrapper[4869]: I0202 14:46:16.460959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56"} Feb 02 14:46:16 crc kubenswrapper[4869]: I0202 14:46:16.460996 4869 scope.go:117] "RemoveContainer" containerID="995600ddc71335630e5c7a8db13517e43bb5e0723cca29a04780981f435caaaa" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.416816 4869 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-console/console-f9d7485db-ptmkd" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" containerID="cri-o://4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed" gracePeriod=15 Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.875726 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ptmkd_ccaee1bd-fef5-4874-9e96-002a733fd5dc/console/0.log" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.875813 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ptmkd" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948365 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948464 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948489 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948527 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948563 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948650 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.948690 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") pod \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\" (UID: \"ccaee1bd-fef5-4874-9e96-002a733fd5dc\") " Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.949845 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config" (OuterVolumeSpecName: "console-config") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950080 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950104 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950437 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950470 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950483 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.950599 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca" (OuterVolumeSpecName: "service-ca") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.959139 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.960721 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:46:23 crc kubenswrapper[4869]: I0202 14:46:23.961229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf" (OuterVolumeSpecName: "kube-api-access-wbgxf") pod "ccaee1bd-fef5-4874-9e96-002a733fd5dc" (UID: "ccaee1bd-fef5-4874-9e96-002a733fd5dc"). InnerVolumeSpecName "kube-api-access-wbgxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.051697 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbgxf\" (UniqueName: \"kubernetes.io/projected/ccaee1bd-fef5-4874-9e96-002a733fd5dc-kube-api-access-wbgxf\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.051754 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccaee1bd-fef5-4874-9e96-002a733fd5dc-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.051765 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.051775 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccaee1bd-fef5-4874-9e96-002a733fd5dc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.349106 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"] Feb 02 14:46:24 crc kubenswrapper[4869]: E0202 14:46:24.349937 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.349957 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.350116 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerName="console" Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.351326 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.354245 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.357490 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"]
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.457691 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.457970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.458040 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521430 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ptmkd_ccaee1bd-fef5-4874-9e96-002a733fd5dc/console/0.log"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521508 4869 generic.go:334] "Generic (PLEG): container finished" podID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" containerID="4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed" exitCode=2
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521553 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ptmkd" event={"ID":"ccaee1bd-fef5-4874-9e96-002a733fd5dc","Type":"ContainerDied","Data":"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"}
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521601 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ptmkd" event={"ID":"ccaee1bd-fef5-4874-9e96-002a733fd5dc","Type":"ContainerDied","Data":"16f76cd6bf05f6fb4f402ecc35e901805472a099619bf8e10a27be6e93584f89"}
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521637 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ptmkd"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.521637 4869 scope.go:117] "RemoveContainer" containerID="4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.561384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.561473 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.561554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.563983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.565703 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.575393 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"]
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.580042 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ptmkd"]
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.583791 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.673115 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.850274 4869 scope.go:117] "RemoveContainer" containerID="4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"
Feb 02 14:46:24 crc kubenswrapper[4869]: E0202 14:46:24.850948 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed\": container with ID starting with 4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed not found: ID does not exist" containerID="4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"
Feb 02 14:46:24 crc kubenswrapper[4869]: I0202 14:46:24.851013 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed"} err="failed to get container status \"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed\": rpc error: code = NotFound desc = could not find container \"4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed\": container with ID starting with 4ed11cf5bb8811df3774c190a6ed3c25268c89d51f2ad3a7b045ac5bf6dbb7ed not found: ID does not exist"
Feb 02 14:46:25 crc kubenswrapper[4869]: I0202 14:46:25.087659 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx"]
Feb 02 14:46:25 crc kubenswrapper[4869]: I0202 14:46:25.121349 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 02 14:46:25 crc kubenswrapper[4869]: I0202 14:46:25.475248 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccaee1bd-fef5-4874-9e96-002a733fd5dc" path="/var/lib/kubelet/pods/ccaee1bd-fef5-4874-9e96-002a733fd5dc/volumes"
Feb 02 14:46:25 crc kubenswrapper[4869]: I0202 14:46:25.528602 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerStarted","Data":"c817e41703b8fc035a2f1079427307f969158b9aa17b598a01e4601d00e56c10"}
Feb 02 14:46:26 crc kubenswrapper[4869]: I0202 14:46:26.536378 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerStarted","Data":"f4d481a024f73f0f6b84ffe7965dab28071c8e977b999b0abc73b835eee8dca6"}
Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.544176 4869 generic.go:334] "Generic (PLEG): container finished" podID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerID="f4d481a024f73f0f6b84ffe7965dab28071c8e977b999b0abc73b835eee8dca6" exitCode=0
Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.544606 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerDied","Data":"f4d481a024f73f0f6b84ffe7965dab28071c8e977b999b0abc73b835eee8dca6"}
Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.701332 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"]
Feb 02 14:46:27 crc kubenswrapper[4869]:
I0202 14:46:27.703058 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.709865 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"] Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.810238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.810296 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.810318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.911267 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.911326 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.911348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.911965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.912002 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:27 crc kubenswrapper[4869]: I0202 14:46:27.942855 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") pod \"redhat-operators-68hxt\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:28 crc kubenswrapper[4869]: I0202 14:46:28.024725 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:28 crc kubenswrapper[4869]: I0202 14:46:28.305484 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"] Feb 02 14:46:28 crc kubenswrapper[4869]: W0202 14:46:28.331888 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba11fdd_6b64_41ad_9106_0eda21b92a5a.slice/crio-5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b WatchSource:0}: Error finding container 5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b: Status 404 returned error can't find the container with id 5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b Feb 02 14:46:28 crc kubenswrapper[4869]: I0202 14:46:28.551593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerStarted","Data":"5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b"} Feb 02 14:46:28 crc kubenswrapper[4869]: E0202 14:46:28.724412 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba11fdd_6b64_41ad_9106_0eda21b92a5a.slice/crio-386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba11fdd_6b64_41ad_9106_0eda21b92a5a.slice/crio-conmon-386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4.scope\": RecentStats: unable to find data in memory cache]" Feb 02 14:46:29 crc kubenswrapper[4869]: I0202 14:46:29.560362 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerID="386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4" exitCode=0 Feb 02 14:46:29 crc kubenswrapper[4869]: I0202 14:46:29.560444 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerDied","Data":"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4"} Feb 02 14:46:33 crc kubenswrapper[4869]: I0202 14:46:33.588077 4869 generic.go:334] "Generic (PLEG): container finished" podID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerID="63faa96ad77efc4aa0694b4e352025f2e43421a504a4556a806a0f787868c946" exitCode=0 Feb 02 14:46:33 crc kubenswrapper[4869]: I0202 14:46:33.588117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerDied","Data":"63faa96ad77efc4aa0694b4e352025f2e43421a504a4556a806a0f787868c946"} Feb 02 14:46:34 crc kubenswrapper[4869]: I0202 14:46:34.598028 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerStarted","Data":"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"} Feb 02 14:46:34 crc kubenswrapper[4869]: I0202 14:46:34.601639 4869 generic.go:334] "Generic (PLEG): container finished" podID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerID="c9a4318758ed809be7787ae40078b0d811fd88f4892994d6c22c406e0867bbb4" exitCode=0 Feb 02 14:46:34 crc kubenswrapper[4869]: I0202 14:46:34.601704 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerDied","Data":"c9a4318758ed809be7787ae40078b0d811fd88f4892994d6c22c406e0867bbb4"} Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.610890 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerID="a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892" exitCode=0 Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.611045 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerDied","Data":"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"} Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.882243 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.936329 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") pod \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.936418 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") pod \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.936585 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") pod \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\" (UID: \"861ed901-c46c-49d9-83ad-aeca9fd3f93b\") " Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.937432 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle" (OuterVolumeSpecName: "bundle") pod "861ed901-c46c-49d9-83ad-aeca9fd3f93b" (UID: "861ed901-c46c-49d9-83ad-aeca9fd3f93b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.942867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc" (OuterVolumeSpecName: "kube-api-access-s9fxc") pod "861ed901-c46c-49d9-83ad-aeca9fd3f93b" (UID: "861ed901-c46c-49d9-83ad-aeca9fd3f93b"). InnerVolumeSpecName "kube-api-access-s9fxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:46:35 crc kubenswrapper[4869]: I0202 14:46:35.949146 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util" (OuterVolumeSpecName: "util") pod "861ed901-c46c-49d9-83ad-aeca9fd3f93b" (UID: "861ed901-c46c-49d9-83ad-aeca9fd3f93b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.038098 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-util\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.038155 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/861ed901-c46c-49d9-83ad-aeca9fd3f93b-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.038165 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9fxc\" (UniqueName: \"kubernetes.io/projected/861ed901-c46c-49d9-83ad-aeca9fd3f93b-kube-api-access-s9fxc\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.622031 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" event={"ID":"861ed901-c46c-49d9-83ad-aeca9fd3f93b","Type":"ContainerDied","Data":"c817e41703b8fc035a2f1079427307f969158b9aa17b598a01e4601d00e56c10"} Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.622102 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c817e41703b8fc035a2f1079427307f969158b9aa17b598a01e4601d00e56c10" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.622139 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx" Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.624828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerStarted","Data":"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"} Feb 02 14:46:36 crc kubenswrapper[4869]: I0202 14:46:36.663185 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68hxt" podStartSLOduration=3.080218844 podStartE2EDuration="9.663156234s" podCreationTimestamp="2026-02-02 14:46:27 +0000 UTC" firstStartedPulling="2026-02-02 14:46:29.562265743 +0000 UTC m=+791.206902513" lastFinishedPulling="2026-02-02 14:46:36.145203133 +0000 UTC m=+797.789839903" observedRunningTime="2026-02-02 14:46:36.652999772 +0000 UTC m=+798.297636542" watchObservedRunningTime="2026-02-02 14:46:36.663156234 +0000 UTC m=+798.307793004" Feb 02 14:46:38 crc kubenswrapper[4869]: I0202 14:46:38.025262 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:38 crc kubenswrapper[4869]: I0202 14:46:38.026579 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:39 crc kubenswrapper[4869]: I0202 14:46:39.078370 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68hxt" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" probeResult="failure" output=< Feb 02 14:46:39 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 14:46:39 crc kubenswrapper[4869]: > Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.725155 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"] Feb 02 14:46:47 crc kubenswrapper[4869]: E0202 14:46:47.726433 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="pull" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.726454 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="pull" Feb 02 14:46:47 crc kubenswrapper[4869]: E0202 14:46:47.726470 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="extract" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.726478 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="extract" Feb 02 14:46:47 crc kubenswrapper[4869]: E0202 14:46:47.726494 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="util" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.726502 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="util" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.726641 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="861ed901-c46c-49d9-83ad-aeca9fd3f93b" containerName="extract" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.727282 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.729892 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.730080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pdfd4" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.730273 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.731428 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.732997 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.748618 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"] Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.832116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2vt\" (UniqueName: \"kubernetes.io/projected/7a0708ec-3eb5-4515-adf0-e36c732da54e-kube-api-access-vh2vt\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.832197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-apiservice-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.832231 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-webhook-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.933273 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2vt\" (UniqueName: \"kubernetes.io/projected/7a0708ec-3eb5-4515-adf0-e36c732da54e-kube-api-access-vh2vt\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.933341 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-apiservice-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.933362 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-webhook-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.941604 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-apiservice-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.949716 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a0708ec-3eb5-4515-adf0-e36c732da54e-webhook-cert\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:47 crc kubenswrapper[4869]: I0202 14:46:47.954174 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2vt\" (UniqueName: \"kubernetes.io/projected/7a0708ec-3eb5-4515-adf0-e36c732da54e-kube-api-access-vh2vt\") pod \"metallb-operator-controller-manager-6b74bd8485-6rx7p\" (UID: \"7a0708ec-3eb5-4515-adf0-e36c732da54e\") " pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.063123 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"] Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.064024 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.067415 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.068634 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6gg9v" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.070099 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.080328 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.089762 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"] Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.120956 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.140416 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-apiservice-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.140476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-webhook-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.140526 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp599\" (UniqueName: \"kubernetes.io/projected/322f75dd-f952-451d-b505-400b173b382c-kube-api-access-gp599\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.218292 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.243250 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-webhook-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.243925 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp599\" (UniqueName: \"kubernetes.io/projected/322f75dd-f952-451d-b505-400b173b382c-kube-api-access-gp599\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.244553 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-apiservice-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.250746 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-apiservice-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.266335 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/322f75dd-f952-451d-b505-400b173b382c-webhook-cert\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.301182 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp599\" (UniqueName: \"kubernetes.io/projected/322f75dd-f952-451d-b505-400b173b382c-kube-api-access-gp599\") pod \"metallb-operator-webhook-server-69b678c656-9prhr\" (UID: \"322f75dd-f952-451d-b505-400b173b382c\") " pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.384313 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.555942 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p"] Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.714040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" event={"ID":"7a0708ec-3eb5-4515-adf0-e36c732da54e","Type":"ContainerStarted","Data":"68c346e18c5d1bd57b9cd380e7e7089ecfcc535d6384dbd95b433e49e0f388f6"} Feb 02 14:46:48 crc kubenswrapper[4869]: I0202 14:46:48.746366 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69b678c656-9prhr"] Feb 02 14:46:48 crc kubenswrapper[4869]: W0202 14:46:48.765417 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod322f75dd_f952_451d_b505_400b173b382c.slice/crio-ee4b30362594d16cfe5147b688410c40d6c7baba258434fb9169b7b947078e14 WatchSource:0}: Error finding container ee4b30362594d16cfe5147b688410c40d6c7baba258434fb9169b7b947078e14: Status 404 returned error can't find the container with id ee4b30362594d16cfe5147b688410c40d6c7baba258434fb9169b7b947078e14 Feb 02 14:46:49 crc kubenswrapper[4869]: I0202 14:46:49.721712 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" event={"ID":"322f75dd-f952-451d-b505-400b173b382c","Type":"ContainerStarted","Data":"ee4b30362594d16cfe5147b688410c40d6c7baba258434fb9169b7b947078e14"} Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.081205 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"] Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.081535 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68hxt" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" containerID="cri-o://412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd" gracePeriod=2 Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.567354 4869 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.696162 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") pod \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.701082 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") pod \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.701139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") pod \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\" (UID: \"1ba11fdd-6b64-41ad-9106-0eda21b92a5a\") " Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.702283 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities" (OuterVolumeSpecName: "utilities") pod "1ba11fdd-6b64-41ad-9106-0eda21b92a5a" (UID: "1ba11fdd-6b64-41ad-9106-0eda21b92a5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.723217 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767" (OuterVolumeSpecName: "kube-api-access-h4767") pod "1ba11fdd-6b64-41ad-9106-0eda21b92a5a" (UID: "1ba11fdd-6b64-41ad-9106-0eda21b92a5a"). InnerVolumeSpecName "kube-api-access-h4767". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.738742 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerID="412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd" exitCode=0 Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.738839 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerDied","Data":"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"} Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.738933 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68hxt" event={"ID":"1ba11fdd-6b64-41ad-9106-0eda21b92a5a","Type":"ContainerDied","Data":"5912e4ed59a5338422ca4c89d1022257f436ebab193be3d88e7ab40cdf02a72b"} Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.738963 4869 scope.go:117] "RemoveContainer" containerID="412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd" Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.739059 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68hxt" Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.802814 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4767\" (UniqueName: \"kubernetes.io/projected/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-kube-api-access-h4767\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.802865 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.875638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ba11fdd-6b64-41ad-9106-0eda21b92a5a" (UID: "1ba11fdd-6b64-41ad-9106-0eda21b92a5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:46:50 crc kubenswrapper[4869]: I0202 14:46:50.904324 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba11fdd-6b64-41ad-9106-0eda21b92a5a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:46:51 crc kubenswrapper[4869]: I0202 14:46:51.078527 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"] Feb 02 14:46:51 crc kubenswrapper[4869]: I0202 14:46:51.083665 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68hxt"] Feb 02 14:46:51 crc kubenswrapper[4869]: I0202 14:46:51.473964 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" path="/var/lib/kubelet/pods/1ba11fdd-6b64-41ad-9106-0eda21b92a5a/volumes" Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.079265 4869 scope.go:117] "RemoveContainer" containerID="a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892" Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.103241 4869 scope.go:117] "RemoveContainer" containerID="386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4" Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.133925 4869 scope.go:117] "RemoveContainer" containerID="412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd" Feb 02 14:46:52 crc kubenswrapper[4869]: E0202 14:46:52.135628 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd\": container with ID starting with 412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd not found: ID does not exist" containerID="412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd" Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.135827 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd"} err="failed to get container status \"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd\": rpc error: code = NotFound desc = could not find container \"412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd\": container with ID starting with 412e61a8fa748784ed8e818aa159f413b7651559450e09cd72743c3b5b4a4ddd not found: ID does not exist" Feb 02 14:46:52 crc 
kubenswrapper[4869]: I0202 14:46:52.136004 4869 scope.go:117] "RemoveContainer" containerID="a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892" Feb 02 14:46:52 crc kubenswrapper[4869]: E0202 14:46:52.136844 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892\": container with ID starting with a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892 not found: ID does not exist" containerID="a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892" Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.136903 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892"} err="failed to get container status \"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892\": rpc error: code = NotFound desc = could not find container \"a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892\": container with ID starting with a4091fe1ab3848e374edbbf5412d5eadd1698974c205286262cb335e92493892 not found: ID does not exist" Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.136981 4869 scope.go:117] "RemoveContainer" containerID="386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4" Feb 02 14:46:52 crc kubenswrapper[4869]: E0202 14:46:52.137459 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4\": container with ID starting with 386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4 not found: ID does not exist" containerID="386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4" Feb 02 14:46:52 crc kubenswrapper[4869]: I0202 14:46:52.137538 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4"} err="failed to get container status \"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4\": rpc error: code = NotFound desc = could not find container \"386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4\": container with ID starting with 386c2b76aabe1f366cf346e215bc927d51c0e38b410ae694af8873dc90558df4 not found: ID does not exist" Feb 02 14:46:55 crc kubenswrapper[4869]: I0202 14:46:55.788100 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" event={"ID":"7a0708ec-3eb5-4515-adf0-e36c732da54e","Type":"ContainerStarted","Data":"8f899c60dacec5159f394efda1af763411c50d72d4cb2359d84cfdc989055fdb"} Feb 02 14:46:55 crc kubenswrapper[4869]: I0202 14:46:55.788826 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:46:55 crc kubenswrapper[4869]: I0202 14:46:55.813653 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" podStartSLOduration=2.835360931 podStartE2EDuration="8.813623569s" podCreationTimestamp="2026-02-02 14:46:47 +0000 UTC" firstStartedPulling="2026-02-02 14:46:48.569724747 +0000 UTC m=+810.214361517" lastFinishedPulling="2026-02-02 14:46:54.547987385 +0000 UTC m=+816.192624155" observedRunningTime="2026-02-02 14:46:55.811836945 +0000 UTC 
m=+817.456473715" watchObservedRunningTime="2026-02-02 14:46:55.813623569 +0000 UTC m=+817.458260329" Feb 02 14:46:56 crc kubenswrapper[4869]: I0202 14:46:56.797209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" event={"ID":"322f75dd-f952-451d-b505-400b173b382c","Type":"ContainerStarted","Data":"04b0a0f2a1283c9d50cb479ef5acca4afdeb272896bf39d7368f676a48ea372a"} Feb 02 14:46:56 crc kubenswrapper[4869]: I0202 14:46:56.798082 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:46:56 crc kubenswrapper[4869]: I0202 14:46:56.826843 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" podStartSLOduration=1.529894039 podStartE2EDuration="8.826811899s" podCreationTimestamp="2026-02-02 14:46:48 +0000 UTC" firstStartedPulling="2026-02-02 14:46:48.768927027 +0000 UTC m=+810.413563797" lastFinishedPulling="2026-02-02 14:46:56.065844887 +0000 UTC m=+817.710481657" observedRunningTime="2026-02-02 14:46:56.82280542 +0000 UTC m=+818.467442210" watchObservedRunningTime="2026-02-02 14:46:56.826811899 +0000 UTC m=+818.471448669" Feb 02 14:47:08 crc kubenswrapper[4869]: I0202 14:47:08.391218 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-69b678c656-9prhr" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.084298 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b74bd8485-6rx7p" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.808601 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jrfvv"] Feb 02 14:47:28 crc kubenswrapper[4869]: E0202 14:47:28.815318 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.815372 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" Feb 02 14:47:28 crc kubenswrapper[4869]: E0202 14:47:28.815403 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="extract-utilities" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.815412 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="extract-utilities" Feb 02 14:47:28 crc kubenswrapper[4869]: E0202 14:47:28.815430 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="extract-content" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.815439 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="extract-content" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.815689 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba11fdd-6b64-41ad-9106-0eda21b92a5a" containerName="registry-server" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.818208 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"] Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.818931 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.819552 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jrfvv" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.825263 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.825327 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"] Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.825627 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-69bkb" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.826250 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.838393 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.932999 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qkkx4"] Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.934396 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qkkx4" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.940414 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.940792 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.940895 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.941111 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mmdlj" Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.949524 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-45hcg"] Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.950863 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.952591 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 02 14:47:28 crc kubenswrapper[4869]: I0202 14:47:28.968263 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-45hcg"]
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000340 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-reloader\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-startup\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jmd\" (UniqueName: \"kubernetes.io/projected/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-kube-api-access-q8jmd\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdshz\" (UniqueName: \"kubernetes.io/projected/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-kube-api-access-qdshz\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000822 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-conf\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000885 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.000973 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001012 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-sockets\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z6nl\" (UniqueName: \"kubernetes.io/projected/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-kube-api-access-2z6nl\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001240 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics-certs\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.001289 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-cert\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.101902 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-cert\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbdct\" (UniqueName: \"kubernetes.io/projected/131f6807-e412-436c-8271-86f09259ae74-kube-api-access-bbdct\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-reloader\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102115 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-startup\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102141 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jmd\" (UniqueName: \"kubernetes.io/projected/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-kube-api-access-q8jmd\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102180 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdshz\" (UniqueName: \"kubernetes.io/projected/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-kube-api-access-qdshz\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-conf\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102238 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102276 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102304 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102369 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-sockets\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102407 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z6nl\" (UniqueName: \"kubernetes.io/projected/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-kube-api-access-2z6nl\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102436 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/131f6807-e412-436c-8271-86f09259ae74-metallb-excludel2\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.102507 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics-certs\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.103005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-conf\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.103064 4869 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.103325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-reloader\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.103607 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs podName:fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188 nodeName:}" failed. No retries permitted until 2026-02-02 14:47:29.603497616 +0000 UTC m=+851.248134576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs") pod "controller-6968d8fdc4-45hcg" (UID: "fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188") : secret "controller-certs-secret" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.104600 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-startup\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.105010 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.105811 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.106131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-frr-sockets\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.112356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-metrics-certs\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.119263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-cert\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.124263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.124414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdshz\" (UniqueName: \"kubernetes.io/projected/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-kube-api-access-qdshz\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.128655 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z6nl\" (UniqueName: \"kubernetes.io/projected/4c02ed66-22a0-4bd3-b10b-8dbf872aac9d-kube-api-access-2z6nl\") pod \"frr-k8s-jrfvv\" (UID: \"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d\") " pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.140948 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jmd\" (UniqueName: \"kubernetes.io/projected/d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c-kube-api-access-q8jmd\") pod \"frr-k8s-webhook-server-7df86c4f6c-2v777\" (UID: \"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.147628 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.163080 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.204568 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.204646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbdct\" (UniqueName: \"kubernetes.io/projected/131f6807-e412-436c-8271-86f09259ae74-kube-api-access-bbdct\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.204711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.204770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/131f6807-e412-436c-8271-86f09259ae74-metallb-excludel2\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.204835 4869 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.204977 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs podName:131f6807-e412-436c-8271-86f09259ae74 nodeName:}" failed. No retries permitted until 2026-02-02 14:47:29.70494291 +0000 UTC m=+851.349579680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs") pod "speaker-qkkx4" (UID: "131f6807-e412-436c-8271-86f09259ae74") : secret "speaker-certs-secret" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.205288 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.205422 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist podName:131f6807-e412-436c-8271-86f09259ae74 nodeName:}" failed. No retries permitted until 2026-02-02 14:47:29.705394691 +0000 UTC m=+851.350031661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist") pod "speaker-qkkx4" (UID: "131f6807-e412-436c-8271-86f09259ae74") : secret "metallb-memberlist" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.207428 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/131f6807-e412-436c-8271-86f09259ae74-metallb-excludel2\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.227926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbdct\" (UniqueName: \"kubernetes.io/projected/131f6807-e412-436c-8271-86f09259ae74-kube-api-access-bbdct\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.442205 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"]
Feb 02 14:47:29 crc kubenswrapper[4869]: W0202 14:47:29.452410 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd389ca1e_a7e0_4a90_ae8a_f4d760b1ab1c.slice/crio-bd844187c2d99ee5744ea259ed284625cd3e4a469a6f0864d02a234e2644e10c WatchSource:0}: Error finding container bd844187c2d99ee5744ea259ed284625cd3e4a469a6f0864d02a234e2644e10c: Status 404 returned error can't find the container with id bd844187c2d99ee5744ea259ed284625cd3e4a469a6f0864d02a234e2644e10c
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.613757 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.624693 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188-metrics-certs\") pod \"controller-6968d8fdc4-45hcg\" (UID: \"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188\") " pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.716083 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.716252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.716338 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: E0202 14:47:29.716490 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist podName:131f6807-e412-436c-8271-86f09259ae74 nodeName:}" failed. No retries permitted until 2026-02-02 14:47:30.716464527 +0000 UTC m=+852.361101297 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist") pod "speaker-qkkx4" (UID: "131f6807-e412-436c-8271-86f09259ae74") : secret "metallb-memberlist" not found
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.735099 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-metrics-certs\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:29 crc kubenswrapper[4869]: I0202 14:47:29.901138 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.033988 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"4be9a11b4f47d48af15104dac4c9951616657a8e24ee88d0dbe4177eb1125173"}
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.038636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" event={"ID":"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c","Type":"ContainerStarted","Data":"bd844187c2d99ee5744ea259ed284625cd3e4a469a6f0864d02a234e2644e10c"}
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.157310 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-45hcg"]
Feb 02 14:47:30 crc kubenswrapper[4869]: W0202 14:47:30.168602 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb7d0f1f_ea38_4756_b1fa_5fba1cc1a188.slice/crio-63d1b29a0db49d6b0b7833b543e77719c9fb380dabce864f0f4707e6c48f7931 WatchSource:0}: Error finding container 63d1b29a0db49d6b0b7833b543e77719c9fb380dabce864f0f4707e6c48f7931: Status 404 returned error can't find the container with id 63d1b29a0db49d6b0b7833b543e77719c9fb380dabce864f0f4707e6c48f7931
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.743529 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.761169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/131f6807-e412-436c-8271-86f09259ae74-memberlist\") pod \"speaker-qkkx4\" (UID: \"131f6807-e412-436c-8271-86f09259ae74\") " pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:30 crc kubenswrapper[4869]: I0202 14:47:30.787057 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:30 crc kubenswrapper[4869]: W0202 14:47:30.852265 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod131f6807_e412_436c_8271_86f09259ae74.slice/crio-03cf88c3548d7f0fa934e2969a96b4fffa22c5c2223788e653ca02547a96df88 WatchSource:0}: Error finding container 03cf88c3548d7f0fa934e2969a96b4fffa22c5c2223788e653ca02547a96df88: Status 404 returned error can't find the container with id 03cf88c3548d7f0fa934e2969a96b4fffa22c5c2223788e653ca02547a96df88
Feb 02 14:47:31 crc kubenswrapper[4869]: I0202 14:47:31.059429 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-45hcg" event={"ID":"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188","Type":"ContainerStarted","Data":"00642b31af6a0d04cad645260ade532717bb2a1142bfe032bed0eb570ce64210"}
Feb 02 14:47:31 crc kubenswrapper[4869]: I0202 14:47:31.059507 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-45hcg" event={"ID":"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188","Type":"ContainerStarted","Data":"63d1b29a0db49d6b0b7833b543e77719c9fb380dabce864f0f4707e6c48f7931"}
Feb 02 14:47:31 crc kubenswrapper[4869]: I0202 14:47:31.061155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qkkx4" event={"ID":"131f6807-e412-436c-8271-86f09259ae74","Type":"ContainerStarted","Data":"03cf88c3548d7f0fa934e2969a96b4fffa22c5c2223788e653ca02547a96df88"}
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.082509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qkkx4" event={"ID":"131f6807-e412-436c-8271-86f09259ae74","Type":"ContainerStarted","Data":"074ebe04aab5f18b86421f3553ba4f1b66f1b7c8c1b2cf7b2ff5980580c4ad8f"}
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.083025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qkkx4" event={"ID":"131f6807-e412-436c-8271-86f09259ae74","Type":"ContainerStarted","Data":"6e799a6f5ff21b0680fde73130a9ad0f1e73506fcfdf54e14761a395bf73792f"}
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.084376 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.086350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-45hcg" event={"ID":"fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188","Type":"ContainerStarted","Data":"769b275e637f6ab07ba74b759f6913ff9252bcd410d7484a20f676eb104d15ce"}
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.086975 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.118466 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qkkx4" podStartSLOduration=4.118441095 podStartE2EDuration="4.118441095s" podCreationTimestamp="2026-02-02 14:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:47:32.113152584 +0000 UTC m=+853.757789364" watchObservedRunningTime="2026-02-02 14:47:32.118441095 +0000 UTC m=+853.763077875"
Feb 02 14:47:32 crc kubenswrapper[4869]: I0202 14:47:32.138418 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-45hcg" podStartSLOduration=4.13839152 podStartE2EDuration="4.13839152s" podCreationTimestamp="2026-02-02 14:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:47:32.136479442 +0000 UTC m=+853.781116212" watchObservedRunningTime="2026-02-02 14:47:32.13839152 +0000 UTC m=+853.783028290"
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.202706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" event={"ID":"d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c","Type":"ContainerStarted","Data":"3a9fcbc52cad7510cb70dd987494f6397abfffdfd750c71c7ebb5e5e38ee0c88"}
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.203534 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.206677 4869 generic.go:334] "Generic (PLEG): container finished" podID="4c02ed66-22a0-4bd3-b10b-8dbf872aac9d" containerID="1fa6f83a598986d828dad7af3c1b8fb05cc86b744229126c509170bfb725ed2a" exitCode=0
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.206785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerDied","Data":"1fa6f83a598986d828dad7af3c1b8fb05cc86b744229126c509170bfb725ed2a"}
Feb 02 14:47:40 crc kubenswrapper[4869]: I0202 14:47:40.261451 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777" podStartSLOduration=2.577562387 podStartE2EDuration="12.261420512s" podCreationTimestamp="2026-02-02 14:47:28 +0000 UTC" firstStartedPulling="2026-02-02 14:47:29.455161261 +0000 UTC m=+851.099798041" lastFinishedPulling="2026-02-02 14:47:39.139019396 +0000 UTC m=+860.783656166" observedRunningTime="2026-02-02 14:47:40.222305393 +0000 UTC m=+861.866942173" watchObservedRunningTime="2026-02-02 14:47:40.261420512 +0000 UTC m=+861.906057282"
Feb 02 14:47:41 crc kubenswrapper[4869]: I0202 14:47:41.217072 4869 generic.go:334] "Generic (PLEG): container finished" podID="4c02ed66-22a0-4bd3-b10b-8dbf872aac9d" containerID="3b4a0df8763afebb1c377d1f4234d7e5f4ab5bfd96c2454f3d31647c7d282221" exitCode=0
Feb 02 14:47:41 crc kubenswrapper[4869]: I0202 14:47:41.217179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerDied","Data":"3b4a0df8763afebb1c377d1f4234d7e5f4ab5bfd96c2454f3d31647c7d282221"}
Feb 02 14:47:42 crc kubenswrapper[4869]: I0202 14:47:42.226739 4869 generic.go:334] "Generic (PLEG): container finished" podID="4c02ed66-22a0-4bd3-b10b-8dbf872aac9d" containerID="d76f1ba917db524b828f430cdf069445b7b05471641b2c36ea8fbe07ddc380b9" exitCode=0
Feb 02 14:47:42 crc kubenswrapper[4869]: I0202 14:47:42.226803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerDied","Data":"d76f1ba917db524b828f430cdf069445b7b05471641b2c36ea8fbe07ddc380b9"}
Feb 02 14:47:43 crc kubenswrapper[4869]: I0202 14:47:43.237410 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"da2535e0141c2157dbef7093fed584254d5d234146c1c0b6f1ae2361e87b76f8"}
Feb 02 14:47:43 crc kubenswrapper[4869]: I0202 14:47:43.238262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"71595c7fcd4f46a3f64b2f9ec09d35f68c5ef947592469fc2fa24c2fbd7ca480"}
Feb 02 14:47:43 crc kubenswrapper[4869]: I0202 14:47:43.238279 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"d98c315bdc13f1a58a1254bd61d0a1bb4d1abaab149127f6d0319e5de022553e"}
Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.252310 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"485cf16872161489184c324a1394499c5acc4ffe32b9734cdf6e654da673fe76"}
Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.252813 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.252826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"408d47df808c819c14ef45dd47d4aa75d69381ab1e7b60e8157b7a9a7c780529"}
Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.252853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jrfvv" event={"ID":"4c02ed66-22a0-4bd3-b10b-8dbf872aac9d","Type":"ContainerStarted","Data":"faa2b27d97c863017eb6e7fca4e94e918076e5240ccd4c276c074c6c7641d161"}
Feb 02 14:47:44 crc kubenswrapper[4869]: I0202 14:47:44.283080 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jrfvv" podStartSLOduration=6.522341301 podStartE2EDuration="16.283063301s" podCreationTimestamp="2026-02-02 14:47:28 +0000 UTC" firstStartedPulling="2026-02-02 14:47:29.35988299 +0000 UTC m=+851.004519760" lastFinishedPulling="2026-02-02 14:47:39.12060499 +0000 UTC m=+860.765241760" observedRunningTime="2026-02-02 14:47:44.278468187 +0000 UTC m=+865.923104967" watchObservedRunningTime="2026-02-02 14:47:44.283063301 +0000 UTC m=+865.927700071"
Feb 02 14:47:49 crc kubenswrapper[4869]: I0202 14:47:49.153104 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-2v777"
Feb 02 14:47:49 crc kubenswrapper[4869]: I0202 14:47:49.163416 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:49 crc kubenswrapper[4869]: I0202 14:47:49.214754 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:49 crc kubenswrapper[4869]: I0202 14:47:49.906127 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-45hcg"
Feb 02 14:47:50 crc kubenswrapper[4869]: I0202 14:47:50.790734 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qkkx4"
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.546589 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"]
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.547776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4p87"
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.550571 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8vzf2"
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.551023 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.551239 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.565485 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"]
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.719752 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") pod \"openstack-operator-index-r4p87\" (UID: \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\") " pod="openstack-operators/openstack-operator-index-r4p87"
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.821515 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") pod \"openstack-operator-index-r4p87\" (UID: \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\") " pod="openstack-operators/openstack-operator-index-r4p87"
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.845414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") pod \"openstack-operator-index-r4p87\" (UID: \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\") " pod="openstack-operators/openstack-operator-index-r4p87"
Feb 02 14:47:53 crc kubenswrapper[4869]: I0202 14:47:53.875971 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4p87"
Feb 02 14:47:54 crc kubenswrapper[4869]: I0202 14:47:54.096189 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"]
Feb 02 14:47:54 crc kubenswrapper[4869]: I0202 14:47:54.352829 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4p87" event={"ID":"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1","Type":"ContainerStarted","Data":"918bcb6635635cce2f73c2bf4aec94e06042c6edd128095de0e0218ebcac74d2"}
Feb 02 14:47:56 crc kubenswrapper[4869]: I0202 14:47:56.919400 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"]
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.379531 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4p87" event={"ID":"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1","Type":"ContainerStarted","Data":"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"}
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.399738 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-r4p87" podStartSLOduration=2.25181536 podStartE2EDuration="4.399716131s" podCreationTimestamp="2026-02-02 14:47:53 +0000 UTC" firstStartedPulling="2026-02-02 14:47:54.109045069 +0000 UTC m=+875.753681839" lastFinishedPulling="2026-02-02 14:47:56.25694584 +0000 UTC m=+877.901582610" observedRunningTime="2026-02-02 14:47:57.395841425 +0000 UTC m=+879.040478205" watchObservedRunningTime="2026-02-02 14:47:57.399716131 +0000 UTC m=+879.044352901"
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.521755 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g2t6v"]
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.522770 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.542320 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g2t6v"]
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.685952 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hp9d\" (UniqueName: \"kubernetes.io/projected/39ba26b8-85bb-43c8-80cb-c9523ba9cac7-kube-api-access-4hp9d\") pod \"openstack-operator-index-g2t6v\" (UID: \"39ba26b8-85bb-43c8-80cb-c9523ba9cac7\") " pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.787561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hp9d\" (UniqueName: \"kubernetes.io/projected/39ba26b8-85bb-43c8-80cb-c9523ba9cac7-kube-api-access-4hp9d\") pod \"openstack-operator-index-g2t6v\" (UID: \"39ba26b8-85bb-43c8-80cb-c9523ba9cac7\") " pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.815568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hp9d\" (UniqueName: \"kubernetes.io/projected/39ba26b8-85bb-43c8-80cb-c9523ba9cac7-kube-api-access-4hp9d\") pod \"openstack-operator-index-g2t6v\" (UID: \"39ba26b8-85bb-43c8-80cb-c9523ba9cac7\") " pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:47:57 crc kubenswrapper[4869]: I0202 14:47:57.871239 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:47:58 crc kubenswrapper[4869]: I0202 14:47:58.305539 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g2t6v"]
Feb 02 14:47:58 crc kubenswrapper[4869]: W0202 14:47:58.320072 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39ba26b8_85bb_43c8_80cb_c9523ba9cac7.slice/crio-83e6220a04eef0d13cb0f1e66c28b59ea6b9eed9e078e7956720c9f2c22f2647 WatchSource:0}: Error finding container 83e6220a04eef0d13cb0f1e66c28b59ea6b9eed9e078e7956720c9f2c22f2647: Status 404 returned error can't find the container with id 83e6220a04eef0d13cb0f1e66c28b59ea6b9eed9e078e7956720c9f2c22f2647
Feb 02 14:47:58 crc kubenswrapper[4869]: I0202 14:47:58.388184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g2t6v" event={"ID":"39ba26b8-85bb-43c8-80cb-c9523ba9cac7","Type":"ContainerStarted","Data":"83e6220a04eef0d13cb0f1e66c28b59ea6b9eed9e078e7956720c9f2c22f2647"}
Feb 02 14:47:58 crc kubenswrapper[4869]: I0202 14:47:58.388348 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-r4p87" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerName="registry-server" containerID="cri-o://c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9" gracePeriod=2
Feb 02 14:47:58 crc kubenswrapper[4869]: I0202 14:47:58.933559 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4p87"
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.024993 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") pod \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\" (UID: \"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1\") "
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.043396 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s" (OuterVolumeSpecName: "kube-api-access-bnj2s") pod "a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" (UID: "a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1"). InnerVolumeSpecName "kube-api-access-bnj2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.126852 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnj2s\" (UniqueName: \"kubernetes.io/projected/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1-kube-api-access-bnj2s\") on node \"crc\" DevicePath \"\""
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.167292 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jrfvv"
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.398781 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerID="c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9" exitCode=0
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.398858 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4p87"
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.398902 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4p87" event={"ID":"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1","Type":"ContainerDied","Data":"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"}
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.399006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4p87" event={"ID":"a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1","Type":"ContainerDied","Data":"918bcb6635635cce2f73c2bf4aec94e06042c6edd128095de0e0218ebcac74d2"}
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.399041 4869 scope.go:117] "RemoveContainer" containerID="c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.401104 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g2t6v" event={"ID":"39ba26b8-85bb-43c8-80cb-c9523ba9cac7","Type":"ContainerStarted","Data":"a0b9a3526aed27a96592bba14976a41a65dfdb4702fa4415184f8d02c078df0f"}
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.417551 4869 scope.go:117] "RemoveContainer" containerID="c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"
Feb 02 14:47:59 crc kubenswrapper[4869]: E0202 14:47:59.418784 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9\": container with ID starting with c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9 not found: ID does not exist" containerID="c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.418832 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9"} err="failed to get container status \"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9\": rpc error: code = NotFound desc = could not find container \"c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9\": container with ID starting with c864c6cd24af0f3befa80487f3ce3b3d880749cfc6e9c9b4dbc3400f4b82daa9 not found: ID does not exist"
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.439317 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g2t6v" podStartSLOduration=2.116625029 podStartE2EDuration="2.439286607s" podCreationTimestamp="2026-02-02 14:47:57 +0000 UTC" firstStartedPulling="2026-02-02 14:47:58.326237502 +0000 UTC m=+879.970874262" lastFinishedPulling="2026-02-02 14:47:58.64889907 +0000 UTC m=+880.293535840" observedRunningTime="2026-02-02 14:47:59.431536806 +0000 UTC m=+881.076173576" watchObservedRunningTime="2026-02-02 14:47:59.439286607 +0000 UTC m=+881.083923377"
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.450392 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"]
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.459348 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-r4p87"]
Feb 02 14:47:59 crc kubenswrapper[4869]: I0202 14:47:59.478844 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" path="/var/lib/kubelet/pods/a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1/volumes"
Feb 02 14:48:07 crc kubenswrapper[4869]: I0202 14:48:07.871956 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:48:07 crc kubenswrapper[4869]: I0202 14:48:07.872455 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:48:07 crc kubenswrapper[4869]: I0202 14:48:07.904687 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:48:08 crc kubenswrapper[4869]: I0202 14:48:08.493617 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-g2t6v"
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.819984 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"]
Feb 02 14:48:16 crc kubenswrapper[4869]: E0202 14:48:16.821044 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerName="registry-server"
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.821061 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerName="registry-server"
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.821193 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d40068-8ce9-4eb6-90b5-ecac4f9e9cd1" containerName="registry-server"
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.822314 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.825788 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-28g5k"
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.837624 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"]
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.909185 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.909287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:16 crc kubenswrapper[4869]: I0202 14:48:16.909338 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.010966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.011314 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.011864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.012053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.012057 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.041475 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") pod \"1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") " pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.140314 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:17 crc kubenswrapper[4869]: I0202 14:48:17.610936 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"]
Feb 02 14:48:17 crc kubenswrapper[4869]: W0202 14:48:17.621244 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode74d3905_6954_4c65_9cd2_d44a638ef83f.slice/crio-e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e WatchSource:0}: Error finding container e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e: Status 404 returned error can't find the container with id e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e
Feb 02 14:48:18 crc kubenswrapper[4869]: I0202 14:48:18.538931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerStarted","Data":"e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e"}
Feb 02 14:48:19 crc kubenswrapper[4869]: I0202 14:48:19.566321 4869 generic.go:334] "Generic (PLEG): container finished" podID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerID="ea2139921f41fa3e67ecddf9456cf45518c101b96748c442670311f452886063" exitCode=0
Feb 02 14:48:19 crc kubenswrapper[4869]: I0202 14:48:19.566449 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerDied","Data":"ea2139921f41fa3e67ecddf9456cf45518c101b96748c442670311f452886063"}
Feb 02 14:48:22 crc kubenswrapper[4869]: I0202 14:48:22.590057 4869 generic.go:334] "Generic (PLEG): container finished" podID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerID="c2f9b38b8211f1db1256483feb7abaa9a5e851d481d7ab79d536571be73a4836" exitCode=0
Feb 02 14:48:22 crc kubenswrapper[4869]: I0202 14:48:22.590188 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerDied","Data":"c2f9b38b8211f1db1256483feb7abaa9a5e851d481d7ab79d536571be73a4836"}
Feb 02 14:48:23 crc kubenswrapper[4869]: I0202 14:48:23.610294 4869 generic.go:334] "Generic (PLEG): container finished" podID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerID="bc04c443a7d5bfbfa579e22c993f6e1206879fa1bd3a48122d921a7fb485305c" exitCode=0
Feb 02 14:48:23 crc kubenswrapper[4869]: I0202 14:48:23.610360 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerDied","Data":"bc04c443a7d5bfbfa579e22c993f6e1206879fa1bd3a48122d921a7fb485305c"}
Feb 02 14:48:24 crc kubenswrapper[4869]: I0202 14:48:24.891798 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.045731 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") pod \"e74d3905-6954-4c65-9cd2-d44a638ef83f\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") "
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.045827 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") pod \"e74d3905-6954-4c65-9cd2-d44a638ef83f\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") "
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.046882 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle" (OuterVolumeSpecName: "bundle") pod "e74d3905-6954-4c65-9cd2-d44a638ef83f" (UID: "e74d3905-6954-4c65-9cd2-d44a638ef83f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.047010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") pod \"e74d3905-6954-4c65-9cd2-d44a638ef83f\" (UID: \"e74d3905-6954-4c65-9cd2-d44a638ef83f\") "
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.047447 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.060225 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2" (OuterVolumeSpecName: "kube-api-access-n6jf2") pod "e74d3905-6954-4c65-9cd2-d44a638ef83f" (UID: "e74d3905-6954-4c65-9cd2-d44a638ef83f"). InnerVolumeSpecName "kube-api-access-n6jf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.061513 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util" (OuterVolumeSpecName: "util") pod "e74d3905-6954-4c65-9cd2-d44a638ef83f" (UID: "e74d3905-6954-4c65-9cd2-d44a638ef83f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.149659 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e74d3905-6954-4c65-9cd2-d44a638ef83f-util\") on node \"crc\" DevicePath \"\""
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.149711 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6jf2\" (UniqueName: \"kubernetes.io/projected/e74d3905-6954-4c65-9cd2-d44a638ef83f-kube-api-access-n6jf2\") on node \"crc\" DevicePath \"\""
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.626408 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn" event={"ID":"e74d3905-6954-4c65-9cd2-d44a638ef83f","Type":"ContainerDied","Data":"e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e"}
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.626476 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0831cd14efae57738f11c30ec556c6bad228433b3e74c850e52bb2c88c5e55e"
Feb 02 14:48:25 crc kubenswrapper[4869]: I0202 14:48:25.626527 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.764698 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"]
Feb 02 14:48:28 crc kubenswrapper[4869]: E0202 14:48:28.765664 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="extract"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.765689 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="extract"
Feb 02 14:48:28 crc kubenswrapper[4869]: E0202 14:48:28.765708 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="pull"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.765717 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="pull"
Feb 02 14:48:28 crc kubenswrapper[4869]: E0202 14:48:28.765739 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="util"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.765749 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="util"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.765896 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74d3905-6954-4c65-9cd2-d44a638ef83f" containerName="extract"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.766556 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.769288 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sck9p"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.809353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxpd\" (UniqueName: \"kubernetes.io/projected/61702985-b65f-4603-9960-3a455bf05c9e-kube-api-access-crxpd\") pod \"openstack-operator-controller-init-5d75b9d66c-jsstz\" (UID: \"61702985-b65f-4603-9960-3a455bf05c9e\") " pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.810994 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"]
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.910192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crxpd\" (UniqueName: \"kubernetes.io/projected/61702985-b65f-4603-9960-3a455bf05c9e-kube-api-access-crxpd\") pod \"openstack-operator-controller-init-5d75b9d66c-jsstz\" (UID: \"61702985-b65f-4603-9960-3a455bf05c9e\") " pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"
Feb 02 14:48:28 crc kubenswrapper[4869]: I0202 14:48:28.935784 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crxpd\" (UniqueName: \"kubernetes.io/projected/61702985-b65f-4603-9960-3a455bf05c9e-kube-api-access-crxpd\") pod \"openstack-operator-controller-init-5d75b9d66c-jsstz\" (UID: \"61702985-b65f-4603-9960-3a455bf05c9e\") " pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"
Feb 02 14:48:29 crc kubenswrapper[4869]: I0202 14:48:29.091522 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"
Feb 02 14:48:29 crc kubenswrapper[4869]: I0202 14:48:29.564821 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"]
Feb 02 14:48:29 crc kubenswrapper[4869]: W0202 14:48:29.579325 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61702985_b65f_4603_9960_3a455bf05c9e.slice/crio-4ea14a04c110727b458ff37019a86f3d5a4313c3d96f5116fc154d500be5d947 WatchSource:0}: Error finding container 4ea14a04c110727b458ff37019a86f3d5a4313c3d96f5116fc154d500be5d947: Status 404 returned error can't find the container with id 4ea14a04c110727b458ff37019a86f3d5a4313c3d96f5116fc154d500be5d947
Feb 02 14:48:29 crc kubenswrapper[4869]: I0202 14:48:29.654707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" event={"ID":"61702985-b65f-4603-9960-3a455bf05c9e","Type":"ContainerStarted","Data":"4ea14a04c110727b458ff37019a86f3d5a4313c3d96f5116fc154d500be5d947"}
Feb 02 14:48:39 crc kubenswrapper[4869]: I0202 14:48:39.759416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" event={"ID":"61702985-b65f-4603-9960-3a455bf05c9e","Type":"ContainerStarted","Data":"49f24e968bce5445f5d8ed8f6f8ecda6263188dd37d57f4f253324e55685c4a5"}
Feb 02 14:48:39 crc kubenswrapper[4869]: I0202 14:48:39.760484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"
Feb 02 14:48:39 crc kubenswrapper[4869]: I0202 14:48:39.797114 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz" podStartSLOduration=2.161908066 podStartE2EDuration="11.797042723s" podCreationTimestamp="2026-02-02 14:48:28 +0000 UTC" firstStartedPulling="2026-02-02 14:48:29.582111716 +0000 UTC m=+911.226748486" lastFinishedPulling="2026-02-02 14:48:39.217246353 +0000 UTC m=+920.861883143" observedRunningTime="2026-02-02 14:48:39.791587578 +0000 UTC m=+921.436224358" watchObservedRunningTime="2026-02-02 14:48:39.797042723 +0000 UTC m=+921.441679513"
Feb 02 14:48:45 crc kubenswrapper[4869]: I0202 14:48:45.304772 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:48:45 crc kubenswrapper[4869]: I0202 14:48:45.307047 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:48:49 crc kubenswrapper[4869]: I0202 14:48:49.095941 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5d75b9d66c-jsstz"
Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.851002 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rgslv"]
Feb 02 14:48:51 crc
kubenswrapper[4869]: I0202 14:48:51.875748 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.893451 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.913634 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.913812 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:51 crc kubenswrapper[4869]: I0202 14:48:51.913887 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.015618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.016114 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.016247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.016367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.016769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 
02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.053608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") pod \"certified-operators-rgslv\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.254449 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.628931 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:48:52 crc kubenswrapper[4869]: I0202 14:48:52.862068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerStarted","Data":"500de773517075dede69276293fe3c80940ab88ef8e12edf6ec9251a25ac25db"} Feb 02 14:48:53 crc kubenswrapper[4869]: I0202 14:48:53.876191 4869 generic.go:334] "Generic (PLEG): container finished" podID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerID="8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92" exitCode=0 Feb 02 14:48:53 crc kubenswrapper[4869]: I0202 14:48:53.876267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerDied","Data":"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92"} Feb 02 14:48:54 crc kubenswrapper[4869]: I0202 14:48:54.888992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerStarted","Data":"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d"} Feb 02 14:48:55 crc kubenswrapper[4869]: I0202 14:48:55.900280 4869 generic.go:334] "Generic (PLEG): container finished" podID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerID="8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d" exitCode=0 Feb 02 14:48:55 crc kubenswrapper[4869]: I0202 14:48:55.900345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerDied","Data":"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d"} Feb 02 14:48:56 crc kubenswrapper[4869]: I0202 14:48:56.910684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerStarted","Data":"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6"} Feb 02 14:48:56 crc kubenswrapper[4869]: I0202 14:48:56.938106 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rgslv" podStartSLOduration=3.465744115 podStartE2EDuration="5.938077257s" podCreationTimestamp="2026-02-02 14:48:51 +0000 UTC" firstStartedPulling="2026-02-02 14:48:53.879056586 +0000 UTC m=+935.523693356" lastFinishedPulling="2026-02-02 14:48:56.351389728 +0000 UTC m=+937.996026498" observedRunningTime="2026-02-02 14:48:56.933134894 +0000 UTC m=+938.577771684" watchObservedRunningTime="2026-02-02 14:48:56.938077257 +0000 UTC m=+938.582714027" Feb 02 14:49:02 crc 
kubenswrapper[4869]: I0202 14:49:02.255285 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:02 crc kubenswrapper[4869]: I0202 14:49:02.256351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:02 crc kubenswrapper[4869]: I0202 14:49:02.350153 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:03 crc kubenswrapper[4869]: I0202 14:49:03.065270 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:03 crc kubenswrapper[4869]: I0202 14:49:03.206972 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:49:04 crc kubenswrapper[4869]: I0202 14:49:04.964939 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rgslv" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="registry-server" containerID="cri-o://b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" gracePeriod=2 Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.475242 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.646794 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") pod \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.647053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") pod \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.647169 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") pod \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\" (UID: \"2fd4143f-0316-463b-ae6e-1dc41ade5f61\") " Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.647732 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities" (OuterVolumeSpecName: "utilities") pod "2fd4143f-0316-463b-ae6e-1dc41ade5f61" (UID: "2fd4143f-0316-463b-ae6e-1dc41ade5f61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.654649 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl" (OuterVolumeSpecName: "kube-api-access-g9jrl") pod "2fd4143f-0316-463b-ae6e-1dc41ade5f61" (UID: "2fd4143f-0316-463b-ae6e-1dc41ade5f61"). InnerVolumeSpecName "kube-api-access-g9jrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.713186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fd4143f-0316-463b-ae6e-1dc41ade5f61" (UID: "2fd4143f-0316-463b-ae6e-1dc41ade5f61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.748450 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.748494 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fd4143f-0316-463b-ae6e-1dc41ade5f61-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.748508 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9jrl\" (UniqueName: \"kubernetes.io/projected/2fd4143f-0316-463b-ae6e-1dc41ade5f61-kube-api-access-g9jrl\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.975735 4869 generic.go:334] "Generic (PLEG): container finished" podID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerID="b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" exitCode=0 Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.975794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerDied","Data":"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6"} Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.975857 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rgslv" event={"ID":"2fd4143f-0316-463b-ae6e-1dc41ade5f61","Type":"ContainerDied","Data":"500de773517075dede69276293fe3c80940ab88ef8e12edf6ec9251a25ac25db"} Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.975879 4869 scope.go:117] "RemoveContainer" containerID="b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.976050 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rgslv" Feb 02 14:49:05 crc kubenswrapper[4869]: I0202 14:49:05.998141 4869 scope.go:117] "RemoveContainer" containerID="8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.010456 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.019542 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rgslv"] Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.028294 4869 scope.go:117] "RemoveContainer" containerID="8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.056044 4869 scope.go:117] "RemoveContainer" containerID="b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" Feb 02 14:49:06 crc kubenswrapper[4869]: E0202 14:49:06.056784 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6\": container with ID starting with b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6 not found: ID does not exist" containerID="b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.056866 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6"} err="failed to get container status \"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6\": rpc error: code = NotFound desc = could not find container \"b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6\": container with ID starting with b6d56d114a1bdf0e8419e64d4fc005c38bd0e788fb4c7cc61c34ab014e15f9a6 not found: ID does not exist" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.056937 4869 scope.go:117] "RemoveContainer" containerID="8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d" Feb 02 14:49:06 crc kubenswrapper[4869]: E0202 14:49:06.057512 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d\": container with ID starting with 8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d not found: ID does not exist" containerID="8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.057563 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d"} err="failed to get container status \"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d\": rpc error: code = NotFound desc = could not find container \"8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d\": container with ID starting with 8aa7ce751e65dd26d5253e473e418f41337cc1ff03661f778587b63fed2be91d not found: ID does not exist" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.057598 4869 scope.go:117] "RemoveContainer" containerID="8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92" Feb 02 14:49:06 crc kubenswrapper[4869]: E0202 14:49:06.061558 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92\": container with ID starting with 8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92 not found: ID does not exist" containerID="8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92" Feb 02 14:49:06 crc kubenswrapper[4869]: I0202 14:49:06.061612 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92"} err="failed to get container status \"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92\": rpc error: code = NotFound desc = could not find container \"8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92\": container with ID starting with 8d33635f3845f506209b20be94958a4881ff1a5cdfdc8b3def6d193042486f92 not found: ID does not exist" Feb 02 14:49:07 crc kubenswrapper[4869]: I0202 14:49:07.472451 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" path="/var/lib/kubelet/pods/2fd4143f-0316-463b-ae6e-1dc41ade5f61/volumes" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.588752 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:12 crc kubenswrapper[4869]: E0202 14:49:12.590875 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="registry-server" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.590995 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="registry-server" Feb 02 14:49:12 crc kubenswrapper[4869]: E0202 14:49:12.591094 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="extract-utilities" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.591153 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="extract-utilities" Feb 02 14:49:12 crc kubenswrapper[4869]: E0202 14:49:12.591235 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="extract-content" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.591314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="extract-content" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.591527 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd4143f-0316-463b-ae6e-1dc41ade5f61" containerName="registry-server" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.592618 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.605572 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.762437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.762491 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.762517 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.863842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.863890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.863934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.864543 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.864633 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.888649 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") pod \"redhat-marketplace-cgj22\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:12 crc kubenswrapper[4869]: I0202 14:49:12.913463 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:13 crc kubenswrapper[4869]: I0202 14:49:13.499219 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.046093 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerID="eda72bcc55c95d316258cf868924e75f80c68e4d577ed22a50a3cec2426c387b" exitCode=0 Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.046198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerDied","Data":"eda72bcc55c95d316258cf868924e75f80c68e4d577ed22a50a3cec2426c387b"} Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.046625 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerStarted","Data":"34a6135c6d9cce7c37dc455df3519275e3b6866fffb9f04458808c6fea6ccae2"} Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.048561 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.719173 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.720389 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.722625 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cbtzv" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.738117 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.742346 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.743330 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.753355 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-cqqn8" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.783685 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.795041 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.796522 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.800683 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-htrjw" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.823899 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.825092 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.834613 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-646rv" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.857818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.863390 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.864702 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.869770 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-58ccw" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.891392 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.898844 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.899209 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.900106 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4qr\" (UniqueName: \"kubernetes.io/projected/f605f0c6-e023-433b-8e78-373b32387809-kube-api-access-7m4qr\") pod \"barbican-operator-controller-manager-fc589b45f-28mqn\" (UID: \"f605f0c6-e023-433b-8e78-373b32387809\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.900154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvlh\" (UniqueName: \"kubernetes.io/projected/fc6638c4-5467-48c9-b725-284cd08372f6-kube-api-access-nwvlh\") pod \"cinder-operator-controller-manager-85899c864d-4cnfc\" (UID: \"fc6638c4-5467-48c9-b725-284cd08372f6\") " pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.900174 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8xqx\" (UniqueName: \"kubernetes.io/projected/f07dc950-121d-4a91-8489-dfc187196775-kube-api-access-l8xqx\") pod \"glance-operator-controller-manager-5d77f4dbc9-qmt77\" (UID: \"f07dc950-121d-4a91-8489-dfc187196775\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.900210 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66khg\" (UniqueName: \"kubernetes.io/projected/5ea40597-21e0-4548-ab09-e381dac894ef-kube-api-access-66khg\") pod \"designate-operator-controller-manager-8f4c5cb64-pbxmj\" (UID: \"5ea40597-21e0-4548-ab09-e381dac894ef\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.905889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pnpct" Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.914679 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x"] Feb 02 14:49:14 crc kubenswrapper[4869]: I0202 14:49:14.922704 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.010952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m4qr\" (UniqueName: \"kubernetes.io/projected/f605f0c6-e023-433b-8e78-373b32387809-kube-api-access-7m4qr\") pod \"barbican-operator-controller-manager-fc589b45f-28mqn\" (UID: \"f605f0c6-e023-433b-8e78-373b32387809\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2sm\" (UniqueName: \"kubernetes.io/projected/53467de5-c9d7-4aa0-973d-180c8cb84b27-kube-api-access-xg2sm\") pod \"heat-operator-controller-manager-65dc6c8d9c-9ph7x\" (UID: \"53467de5-c9d7-4aa0-973d-180c8cb84b27\") " 
pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011031 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmmd\" (UniqueName: \"kubernetes.io/projected/ad8b0f9a-67d7-4897-af4b-f344b3d1c502-kube-api-access-pcmmd\") pod \"horizon-operator-controller-manager-5fb775575f-cpjjt\" (UID: \"ad8b0f9a-67d7-4897-af4b-f344b3d1c502\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwvlh\" (UniqueName: \"kubernetes.io/projected/fc6638c4-5467-48c9-b725-284cd08372f6-kube-api-access-nwvlh\") pod \"cinder-operator-controller-manager-85899c864d-4cnfc\" (UID: \"fc6638c4-5467-48c9-b725-284cd08372f6\") " pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011082 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8xqx\" (UniqueName: \"kubernetes.io/projected/f07dc950-121d-4a91-8489-dfc187196775-kube-api-access-l8xqx\") pod \"glance-operator-controller-manager-5d77f4dbc9-qmt77\" (UID: \"f07dc950-121d-4a91-8489-dfc187196775\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.011116 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66khg\" (UniqueName: \"kubernetes.io/projected/5ea40597-21e0-4548-ab09-e381dac894ef-kube-api-access-66khg\") pod \"designate-operator-controller-manager-8f4c5cb64-pbxmj\" (UID: \"5ea40597-21e0-4548-ab09-e381dac894ef\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.054420 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8xqx\" (UniqueName: \"kubernetes.io/projected/f07dc950-121d-4a91-8489-dfc187196775-kube-api-access-l8xqx\") pod \"glance-operator-controller-manager-5d77f4dbc9-qmt77\" (UID: \"f07dc950-121d-4a91-8489-dfc187196775\") " pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.076754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66khg\" (UniqueName: \"kubernetes.io/projected/5ea40597-21e0-4548-ab09-e381dac894ef-kube-api-access-66khg\") pod \"designate-operator-controller-manager-8f4c5cb64-pbxmj\" (UID: \"5ea40597-21e0-4548-ab09-e381dac894ef\") " pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.079568 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.080779 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.082124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerStarted","Data":"292a8800f1074a89c8517ba7b2c39a8724252f08e7b9ac9c8fe944e9593cab13"} Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.084018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwvlh\" (UniqueName: \"kubernetes.io/projected/fc6638c4-5467-48c9-b725-284cd08372f6-kube-api-access-nwvlh\") pod \"cinder-operator-controller-manager-85899c864d-4cnfc\" (UID: \"fc6638c4-5467-48c9-b725-284cd08372f6\") " pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.084278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.086800 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m4qr\" (UniqueName: \"kubernetes.io/projected/f605f0c6-e023-433b-8e78-373b32387809-kube-api-access-7m4qr\") pod \"barbican-operator-controller-manager-fc589b45f-28mqn\" (UID: \"f605f0c6-e023-433b-8e78-373b32387809\") " pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.102555 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.104774 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.107485 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-46pbm" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.108568 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jcwn9" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2sm\" (UniqueName: \"kubernetes.io/projected/53467de5-c9d7-4aa0-973d-180c8cb84b27-kube-api-access-xg2sm\") pod \"heat-operator-controller-manager-65dc6c8d9c-9ph7x\" (UID: \"53467de5-c9d7-4aa0-973d-180c8cb84b27\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119270 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcmmd\" (UniqueName: \"kubernetes.io/projected/ad8b0f9a-67d7-4897-af4b-f344b3d1c502-kube-api-access-pcmmd\") pod \"horizon-operator-controller-manager-5fb775575f-cpjjt\" (UID: \"ad8b0f9a-67d7-4897-af4b-f344b3d1c502\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz42l\" (UniqueName: \"kubernetes.io/projected/77902d6e-ef76-42b0-a40c-0b51f383f580-kube-api-access-nz42l\") pod \"ironic-operator-controller-manager-87bd9d46f-762xj\" (UID: \"77902d6e-ef76-42b0-a40c-0b51f383f580\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.119449 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpmsc\" (UniqueName: \"kubernetes.io/projected/c0779518-9e33-43e3-b373-263d74fbbd0f-kube-api-access-vpmsc\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.131100 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.132727 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.136053 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hmzpm" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.146603 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2sm\" (UniqueName: \"kubernetes.io/projected/53467de5-c9d7-4aa0-973d-180c8cb84b27-kube-api-access-xg2sm\") pod \"heat-operator-controller-manager-65dc6c8d9c-9ph7x\" (UID: \"53467de5-c9d7-4aa0-973d-180c8cb84b27\") " pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.159879 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.167045 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.175060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.175757 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmmd\" (UniqueName: \"kubernetes.io/projected/ad8b0f9a-67d7-4897-af4b-f344b3d1c502-kube-api-access-pcmmd\") pod \"horizon-operator-controller-manager-5fb775575f-cpjjt\" (UID: \"ad8b0f9a-67d7-4897-af4b-f344b3d1c502\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.184521 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.185701 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.190357 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.193946 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.195497 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.201920 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gll2h" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.202613 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.204544 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tdm6w" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.214049 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8tj4\" (UniqueName: \"kubernetes.io/projected/f27a3d01-fbc5-46d9-9c11-ef6c21ead605-kube-api-access-m8tj4\") pod \"keystone-operator-controller-manager-64469b487f-m9czv\" (UID: \"f27a3d01-fbc5-46d9-9c11-ef6c21ead605\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xsn\" (UniqueName: \"kubernetes.io/projected/993dae41-359f-47f7-9a2a-38f7c97d49de-kube-api-access-m8xsn\") pod \"manila-operator-controller-manager-7775d87d9d-l2b72\" (UID: \"993dae41-359f-47f7-9a2a-38f7c97d49de\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.224514 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.224592 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:15.724565701 +0000 UTC m=+957.369202471 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwfs\" (UniqueName: \"kubernetes.io/projected/3b0cf904-7af8-4e57-a664-7e594e557445-kube-api-access-7xwfs\") pod \"mariadb-operator-controller-manager-67bf948998-hpnsb\" (UID: \"3b0cf904-7af8-4e57-a664-7e594e557445\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz42l\" (UniqueName: \"kubernetes.io/projected/77902d6e-ef76-42b0-a40c-0b51f383f580-kube-api-access-nz42l\") pod \"ironic-operator-controller-manager-87bd9d46f-762xj\" (UID: \"77902d6e-ef76-42b0-a40c-0b51f383f580\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.224695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmsc\" (UniqueName: \"kubernetes.io/projected/c0779518-9e33-43e3-b373-263d74fbbd0f-kube-api-access-vpmsc\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.247245 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-swhqr"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.248276 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.261559 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r7q9n" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.275369 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.287891 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.289398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpmsc\" (UniqueName: \"kubernetes.io/projected/c0779518-9e33-43e3-b373-263d74fbbd0f-kube-api-access-vpmsc\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.289606 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.290242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz42l\" (UniqueName: \"kubernetes.io/projected/77902d6e-ef76-42b0-a40c-0b51f383f580-kube-api-access-nz42l\") pod \"ironic-operator-controller-manager-87bd9d46f-762xj\" (UID: \"77902d6e-ef76-42b0-a40c-0b51f383f580\") " pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.294558 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.300191 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2jrdb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.307152 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.307229 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.330059 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8tj4\" (UniqueName: \"kubernetes.io/projected/f27a3d01-fbc5-46d9-9c11-ef6c21ead605-kube-api-access-m8tj4\") pod \"keystone-operator-controller-manager-64469b487f-m9czv\" (UID: \"f27a3d01-fbc5-46d9-9c11-ef6c21ead605\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.385170 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-swhqr"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.385549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xsn\" (UniqueName: \"kubernetes.io/projected/993dae41-359f-47f7-9a2a-38f7c97d49de-kube-api-access-m8xsn\") pod \"manila-operator-controller-manager-7775d87d9d-l2b72\" (UID: \"993dae41-359f-47f7-9a2a-38f7c97d49de\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.385680 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xwfs\" (UniqueName: \"kubernetes.io/projected/3b0cf904-7af8-4e57-a664-7e594e557445-kube-api-access-7xwfs\") pod \"mariadb-operator-controller-manager-67bf948998-hpnsb\" (UID: \"3b0cf904-7af8-4e57-a664-7e594e557445\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.389207 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.390075 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.394467 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.430240 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8tj4\" (UniqueName: \"kubernetes.io/projected/f27a3d01-fbc5-46d9-9c11-ef6c21ead605-kube-api-access-m8tj4\") pod \"keystone-operator-controller-manager-64469b487f-m9czv\" (UID: \"f27a3d01-fbc5-46d9-9c11-ef6c21ead605\") " pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.460101 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-2chmz"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.461883 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xsn\" (UniqueName: \"kubernetes.io/projected/993dae41-359f-47f7-9a2a-38f7c97d49de-kube-api-access-m8xsn\") pod \"manila-operator-controller-manager-7775d87d9d-l2b72\" (UID: \"993dae41-359f-47f7-9a2a-38f7c97d49de\") " pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.462191 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.476734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xwfs\" (UniqueName: \"kubernetes.io/projected/3b0cf904-7af8-4e57-a664-7e594e557445-kube-api-access-7xwfs\") pod \"mariadb-operator-controller-manager-67bf948998-hpnsb\" (UID: \"3b0cf904-7af8-4e57-a664-7e594e557445\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.484573 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fgdqw" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.493156 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkctg\" (UniqueName: \"kubernetes.io/projected/7e9b35b2-f20d-4102-b541-63d2822c215d-kube-api-access-rkctg\") pod \"octavia-operator-controller-manager-7b89ddb58-h2kl2\" (UID: \"7e9b35b2-f20d-4102-b541-63d2822c215d\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.493293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8j79\" (UniqueName: \"kubernetes.io/projected/98a25bb6-75b1-49ad-8d7c-cc4e763470ec-kube-api-access-j8j79\") pod \"nova-operator-controller-manager-5644b66645-2chmz\" (UID: \"98a25bb6-75b1-49ad-8d7c-cc4e763470ec\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.493328 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5fj\" (UniqueName: \"kubernetes.io/projected/c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb-kube-api-access-wj5fj\") pod \"neutron-operator-controller-manager-576995988b-swhqr\" (UID: \"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.511812 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.515441 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-2chmz"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.525565 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.539866 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.552811 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.554235 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.557555 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xvmqq" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.563817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.582209 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.583482 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.588149 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.589063 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.594533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkctg\" (UniqueName: \"kubernetes.io/projected/7e9b35b2-f20d-4102-b541-63d2822c215d-kube-api-access-rkctg\") pod \"octavia-operator-controller-manager-7b89ddb58-h2kl2\" (UID: \"7e9b35b2-f20d-4102-b541-63d2822c215d\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.594640 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8j79\" (UniqueName: \"kubernetes.io/projected/98a25bb6-75b1-49ad-8d7c-cc4e763470ec-kube-api-access-j8j79\") pod \"nova-operator-controller-manager-5644b66645-2chmz\" (UID: \"98a25bb6-75b1-49ad-8d7c-cc4e763470ec\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.594676 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5fj\" (UniqueName: \"kubernetes.io/projected/c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb-kube-api-access-wj5fj\") pod \"neutron-operator-controller-manager-576995988b-swhqr\" (UID: \"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.595679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kggvj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.630037 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.631733 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.647404 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-hgpvb" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.675758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8j79\" (UniqueName: \"kubernetes.io/projected/98a25bb6-75b1-49ad-8d7c-cc4e763470ec-kube-api-access-j8j79\") pod \"nova-operator-controller-manager-5644b66645-2chmz\" (UID: \"98a25bb6-75b1-49ad-8d7c-cc4e763470ec\") " pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.690047 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.692754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5fj\" (UniqueName: \"kubernetes.io/projected/c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb-kube-api-access-wj5fj\") pod \"neutron-operator-controller-manager-576995988b-swhqr\" (UID: \"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb\") " pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.702427 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdv9\" (UniqueName: \"kubernetes.io/projected/bd94e783-b3ec-4d7e-b669-98255f029da6-kube-api-access-qhdv9\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.702523 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxl8\" (UniqueName: \"kubernetes.io/projected/ac2b0707-5906-40df-9457-06739f19df84-kube-api-access-mfxl8\") pod \"placement-operator-controller-manager-5b964cf4cd-6vnjh\" (UID: \"ac2b0707-5906-40df-9457-06739f19df84\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.702675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.702704 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jn2g\" (UniqueName: \"kubernetes.io/projected/cf357940-5e8d-4111-86e6-1fafd5e670cd-kube-api-access-7jn2g\") pod \"ovn-operator-controller-manager-788c46999f-28zx5\" (UID: \"cf357940-5e8d-4111-86e6-1fafd5e670cd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.703724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkctg\" (UniqueName: 
\"kubernetes.io/projected/7e9b35b2-f20d-4102-b541-63d2822c215d-kube-api-access-rkctg\") pod \"octavia-operator-controller-manager-7b89ddb58-h2kl2\" (UID: \"7e9b35b2-f20d-4102-b541-63d2822c215d\") " pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.726906 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.737754 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.742614 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-pvkqz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.745900 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.778164 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.787648 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jn2g\" (UniqueName: \"kubernetes.io/projected/cf357940-5e8d-4111-86e6-1fafd5e670cd-kube-api-access-7jn2g\") pod \"ovn-operator-controller-manager-788c46999f-28zx5\" (UID: \"cf357940-5e8d-4111-86e6-1fafd5e670cd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhdv9\" (UniqueName: \"kubernetes.io/projected/bd94e783-b3ec-4d7e-b669-98255f029da6-kube-api-access-qhdv9\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.808787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxl8\" (UniqueName: 
\"kubernetes.io/projected/ac2b0707-5906-40df-9457-06739f19df84-kube-api-access-mfxl8\") pod \"placement-operator-controller-manager-5b964cf4cd-6vnjh\" (UID: \"ac2b0707-5906-40df-9457-06739f19df84\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.810005 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.810064 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:16.310045131 +0000 UTC m=+957.954681901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.810451 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: E0202 14:49:15.810476 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:16.810467582 +0000 UTC m=+958.455104352 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.820978 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.845281 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.854422 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sfb8j" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.887536 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.902939 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.910556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bw9p\" (UniqueName: \"kubernetes.io/projected/98a357a8-0e70-4f30-a41a-8dde25612a8a-kube-api-access-9bw9p\") pod \"swift-operator-controller-manager-7b89fdf75b-zdwh8\" (UID: \"98a357a8-0e70-4f30-a41a-8dde25612a8a\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.910637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc7b2\" (UniqueName: \"kubernetes.io/projected/7af79025-a32d-4e73-9559-5991093e986a-kube-api-access-kc7b2\") pod \"telemetry-operator-controller-manager-565849b54-fm2kj\" (UID: \"7af79025-a32d-4e73-9559-5991093e986a\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.911450 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.924265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhdv9\" (UniqueName: \"kubernetes.io/projected/bd94e783-b3ec-4d7e-b669-98255f029da6-kube-api-access-qhdv9\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.926549 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jn2g\" (UniqueName: \"kubernetes.io/projected/cf357940-5e8d-4111-86e6-1fafd5e670cd-kube-api-access-7jn2g\") pod \"ovn-operator-controller-manager-788c46999f-28zx5\" (UID: \"cf357940-5e8d-4111-86e6-1fafd5e670cd\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.932854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxl8\" (UniqueName: \"kubernetes.io/projected/ac2b0707-5906-40df-9457-06739f19df84-kube-api-access-mfxl8\") pod \"placement-operator-controller-manager-5b964cf4cd-6vnjh\" (UID: \"ac2b0707-5906-40df-9457-06739f19df84\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.933003 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.934412 4869 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.944279 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.951803 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vn67c" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.955775 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.970333 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2gbsl" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.970553 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.977147 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"] Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.990021 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:49:15 crc kubenswrapper[4869]: I0202 14:49:15.990728 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.011860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc7b2\" (UniqueName: \"kubernetes.io/projected/7af79025-a32d-4e73-9559-5991093e986a-kube-api-access-kc7b2\") pod \"telemetry-operator-controller-manager-565849b54-fm2kj\" (UID: \"7af79025-a32d-4e73-9559-5991093e986a\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.012067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bw9p\" (UniqueName: \"kubernetes.io/projected/98a357a8-0e70-4f30-a41a-8dde25612a8a-kube-api-access-9bw9p\") pod \"swift-operator-controller-manager-7b89fdf75b-zdwh8\" (UID: \"98a357a8-0e70-4f30-a41a-8dde25612a8a\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.024010 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.069648 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bw9p\" (UniqueName: \"kubernetes.io/projected/98a357a8-0e70-4f30-a41a-8dde25612a8a-kube-api-access-9bw9p\") pod \"swift-operator-controller-manager-7b89fdf75b-zdwh8\" (UID: \"98a357a8-0e70-4f30-a41a-8dde25612a8a\") " pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.075755 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.079235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc7b2\" (UniqueName: \"kubernetes.io/projected/7af79025-a32d-4e73-9559-5991093e986a-kube-api-access-kc7b2\") pod \"telemetry-operator-controller-manager-565849b54-fm2kj\" (UID: \"7af79025-a32d-4e73-9559-5991093e986a\") " pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.128267 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerID="292a8800f1074a89c8517ba7b2c39a8724252f08e7b9ac9c8fe944e9593cab13" exitCode=0 Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.128327 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerDied","Data":"292a8800f1074a89c8517ba7b2c39a8724252f08e7b9ac9c8fe944e9593cab13"} Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.128824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd44g\" (UniqueName: \"kubernetes.io/projected/2dfa14d3-9496-44cb-948b-e4065a9930c8-kube-api-access-zd44g\") pod \"watcher-operator-controller-manager-586b95b788-9fsf5\" (UID: \"2dfa14d3-9496-44cb-948b-e4065a9930c8\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.129062 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zwlj\" (UniqueName: \"kubernetes.io/projected/06f5e083-c0ea-4ad0-9a07-50707d84be61-kube-api-access-5zwlj\") pod \"test-operator-controller-manager-56f8bfcd9f-ntthk\" (UID: \"06f5e083-c0ea-4ad0-9a07-50707d84be61\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.144352 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"] Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.145980 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.152616 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.152812 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.155512 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-649np" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.179604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"] Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.208740 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"] Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.210146 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.227640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d5tx6" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.298643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zwlj\" (UniqueName: \"kubernetes.io/projected/06f5e083-c0ea-4ad0-9a07-50707d84be61-kube-api-access-5zwlj\") pod \"test-operator-controller-manager-56f8bfcd9f-ntthk\" (UID: \"06f5e083-c0ea-4ad0-9a07-50707d84be61\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.298722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59rtr\" (UniqueName: \"kubernetes.io/projected/6719d674-1dac-4af1-859b-ea6a2186a20a-kube-api-access-59rtr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-djzsw\" (UID: \"6719d674-1dac-4af1-859b-ea6a2186a20a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.298794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.299021 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.299109 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd44g\" (UniqueName: 
\"kubernetes.io/projected/2dfa14d3-9496-44cb-948b-e4065a9930c8-kube-api-access-zd44g\") pod \"watcher-operator-controller-manager-586b95b788-9fsf5\" (UID: \"2dfa14d3-9496-44cb-948b-e4065a9930c8\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.299148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz4sh\" (UniqueName: \"kubernetes.io/projected/32aa6b38-d480-426c-a36c-4cf34c082e73-kube-api-access-vz4sh\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.356976 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.384340 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zwlj\" (UniqueName: \"kubernetes.io/projected/06f5e083-c0ea-4ad0-9a07-50707d84be61-kube-api-access-5zwlj\") pod \"test-operator-controller-manager-56f8bfcd9f-ntthk\" (UID: \"06f5e083-c0ea-4ad0-9a07-50707d84be61\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.386710 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd44g\" (UniqueName: \"kubernetes.io/projected/2dfa14d3-9496-44cb-948b-e4065a9930c8-kube-api-access-zd44g\") pod \"watcher-operator-controller-manager-586b95b788-9fsf5\" (UID: \"2dfa14d3-9496-44cb-948b-e4065a9930c8\") " pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.390259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"] Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz4sh\" (UniqueName: \"kubernetes.io/projected/32aa6b38-d480-426c-a36c-4cf34c082e73-kube-api-access-vz4sh\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59rtr\" (UniqueName: \"kubernetes.io/projected/6719d674-1dac-4af1-859b-ea6a2186a20a-kube-api-access-59rtr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-djzsw\" (UID: \"6719d674-1dac-4af1-859b-ea6a2186a20a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410484 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.410528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.410842 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.410954 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:17.410928734 +0000 UTC m=+959.055565504 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.411600 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.411646 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:16.911633151 +0000 UTC m=+958.556269911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.412151 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.412209 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:16.912198735 +0000 UTC m=+958.556835505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.422397 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.435812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.452256 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59rtr\" (UniqueName: \"kubernetes.io/projected/6719d674-1dac-4af1-859b-ea6a2186a20a-kube-api-access-59rtr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-djzsw\" (UID: \"6719d674-1dac-4af1-859b-ea6a2186a20a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.468235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz4sh\" (UniqueName: \"kubernetes.io/projected/32aa6b38-d480-426c-a36c-4cf34c082e73-kube-api-access-vz4sh\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.491764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.712841 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77"] Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.830384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.830721 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.830817 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:18.830774899 +0000 UTC m=+960.475411669 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: W0202 14:49:16.840563 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf07dc950_121d_4a91_8489_dfc187196775.slice/crio-c5b6c2c0ab2a193be81f56cce2ac2d8686711474e89ec3b452596e7e59e52e22 WatchSource:0}: Error finding container c5b6c2c0ab2a193be81f56cce2ac2d8686711474e89ec3b452596e7e59e52e22: Status 404 returned error can't find the container with id c5b6c2c0ab2a193be81f56cce2ac2d8686711474e89ec3b452596e7e59e52e22 Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.947297 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.947434 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.947751 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.947833 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:17.947809589 +0000 UTC m=+959.592446359 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.948426 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: E0202 14:49:16.948470 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:17.948459495 +0000 UTC m=+959.593096265 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found Feb 02 14:49:16 crc kubenswrapper[4869]: I0202 14:49:16.988841 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj"] Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.010884 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt"] Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.067490 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x"] Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.088298 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc"] Feb 02 14:49:17 crc kubenswrapper[4869]: W0202 14:49:17.157696 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53467de5_c9d7_4aa0_973d_180c8cb84b27.slice/crio-eded51121e2b9783991b25ec7ed189b4ed23d44b3ece0f69f817d2f14f092c37 WatchSource:0}: Error finding container eded51121e2b9783991b25ec7ed189b4ed23d44b3ece0f69f817d2f14f092c37: Status 404 returned error can't find the container with id eded51121e2b9783991b25ec7ed189b4ed23d44b3ece0f69f817d2f14f092c37 Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.157931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" event={"ID":"5ea40597-21e0-4548-ab09-e381dac894ef","Type":"ContainerStarted","Data":"66c6fd837dcd71931e3097318cf979cba422c0b7036eacce4cb44efeabc22bc3"} Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.161576 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" event={"ID":"f07dc950-121d-4a91-8489-dfc187196775","Type":"ContainerStarted","Data":"c5b6c2c0ab2a193be81f56cce2ac2d8686711474e89ec3b452596e7e59e52e22"} Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.167590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" event={"ID":"ad8b0f9a-67d7-4897-af4b-f344b3d1c502","Type":"ContainerStarted","Data":"9ec46e395679c23eeb9c8f74127a0244184326a8559ec2b1db534251ce0c0846"} Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.169569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" event={"ID":"fc6638c4-5467-48c9-b725-284cd08372f6","Type":"ContainerStarted","Data":"3c3b47259b7c0fc9966a57a2b37172aec96795f374100a8b07641e0b88e85a16"} Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.459731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.460407 4869 secret.go:188] 
Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.460474 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:19.460454243 +0000 UTC m=+961.105091013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.523073 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn"] Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.533690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72"] Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.550241 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb"] Feb 02 14:49:17 crc kubenswrapper[4869]: W0202 14:49:17.570988 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b0cf904_7af8_4e57_a664_7e594e557445.slice/crio-8f1f6328a62edc63fb63c15d2a966bc49cd12e0fd0e67626215053b5e8305f99 WatchSource:0}: Error finding container 8f1f6328a62edc63fb63c15d2a966bc49cd12e0fd0e67626215053b5e8305f99: Status 404 returned error can't find the container with id 8f1f6328a62edc63fb63c15d2a966bc49cd12e0fd0e67626215053b5e8305f99 Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.776827 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv"] Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.812386 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj"] Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.870495 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8"] Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.970933 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:17 crc kubenswrapper[4869]: I0202 14:49:17.971073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.971319 4869 secret.go:188] Couldn't get 
secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.971403 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:19.971373795 +0000 UTC m=+961.616010565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.971467 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 14:49:17 crc kubenswrapper[4869]: E0202 14:49:17.971493 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:19.971484618 +0000 UTC m=+961.616121388 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.013282 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.033425 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.045690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.074369 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.074641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.074815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.076603 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5644b66645-2chmz"] Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.177678 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.177774 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.177860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.182598 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.183557 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") pod \"community-operators-mk6t7\" (UID: 
\"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.212596 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" event={"ID":"53467de5-c9d7-4aa0-973d-180c8cb84b27","Type":"ContainerStarted","Data":"eded51121e2b9783991b25ec7ed189b4ed23d44b3ece0f69f817d2f14f092c37"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.226276 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") pod \"community-operators-mk6t7\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") " pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.228616 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" event={"ID":"f27a3d01-fbc5-46d9-9c11-ef6c21ead605","Type":"ContainerStarted","Data":"0ebd7b98b948904756d3563f45b1c8df7ec70ea597dc9a010bc530676e6f73a6"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.241011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerStarted","Data":"ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.249084 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" event={"ID":"77902d6e-ef76-42b0-a40c-0b51f383f580","Type":"ContainerStarted","Data":"00ea06048ddc8667830932e41773107435e41ca5403583340fe6f4b0ba9e7248"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.291319 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk"] Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.307810 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj"] Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.312703 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cgj22" podStartSLOduration=3.487441797 podStartE2EDuration="6.312673244s" podCreationTimestamp="2026-02-02 14:49:12 +0000 UTC" firstStartedPulling="2026-02-02 14:49:14.048198338 +0000 UTC m=+955.692835108" lastFinishedPulling="2026-02-02 14:49:16.873429785 +0000 UTC m=+958.518066555" observedRunningTime="2026-02-02 14:49:18.280720682 +0000 UTC m=+959.925357452" watchObservedRunningTime="2026-02-02 14:49:18.312673244 +0000 UTC m=+959.957310014" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.332307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" event={"ID":"98a357a8-0e70-4f30-a41a-8dde25612a8a","Type":"ContainerStarted","Data":"07ac21f91f40125a213a6bd8f6b22e8cc4accd5f96868be2a5f0564d14e942e9"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.338621 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2"] Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.340257 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" event={"ID":"f605f0c6-e023-433b-8e78-373b32387809","Type":"ContainerStarted","Data":"ac8117740631684f2b607a6456bc5d0ae94ea118c1bf1ebc98c98c2571998033"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.347824 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5"] Feb 02 14:49:18 crc kubenswrapper[4869]: W0202 14:49:18.349079 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af79025_a32d_4e73_9559_5991093e986a.slice/crio-9ac6741e8253fe00ef5c5537ae260dee0b72449d5d49db322b94f71abd6c6ced WatchSource:0}: Error finding container 9ac6741e8253fe00ef5c5537ae260dee0b72449d5d49db322b94f71abd6c6ced: Status 404 returned error can't find the container with id 9ac6741e8253fe00ef5c5537ae260dee0b72449d5d49db322b94f71abd6c6ced Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.363359 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh"] Feb 02 14:49:18 crc kubenswrapper[4869]: W0202 14:49:18.371050 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf357940_5e8d_4111_86e6_1fafd5e670cd.slice/crio-926e15b739c6f03ebe2b4e4dc35188306c5223bc2e03a3e6f0c7ffb2aaef088e WatchSource:0}: Error finding container 926e15b739c6f03ebe2b4e4dc35188306c5223bc2e03a3e6f0c7ffb2aaef088e: Status 404 returned error can't find the container with id 926e15b739c6f03ebe2b4e4dc35188306c5223bc2e03a3e6f0c7ffb2aaef088e Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.370259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-576995988b-swhqr"] Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.371759 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" event={"ID":"98a25bb6-75b1-49ad-8d7c-cc4e763470ec","Type":"ContainerStarted","Data":"64a89e976ccd1c3efced28ada4285b1efdcbdd3a1c28ca634a1b93949bda31ef"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.376117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" event={"ID":"3b0cf904-7af8-4e57-a664-7e594e557445","Type":"ContainerStarted","Data":"8f1f6328a62edc63fb63c15d2a966bc49cd12e0fd0e67626215053b5e8305f99"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.408486 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5"] Feb 02 14:49:18 crc kubenswrapper[4869]: W0202 14:49:18.408526 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06f5e083_c0ea_4ad0_9a07_50707d84be61.slice/crio-e7e8f26ce23a730172aff1faf8e6aa0a150fa76b34f4d81f2f2a8857bb0e9c1a WatchSource:0}: Error finding container e7e8f26ce23a730172aff1faf8e6aa0a150fa76b34f4d81f2f2a8857bb0e9c1a: Status 404 returned error can't find the container with id e7e8f26ce23a730172aff1faf8e6aa0a150fa76b34f4d81f2f2a8857bb0e9c1a Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.410612 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.443931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" event={"ID":"993dae41-359f-47f7-9a2a-38f7c97d49de","Type":"ContainerStarted","Data":"92f2d4cc86f3d1a27e46b54ba4f6d0191c419271b083d99ade0721689e9a6ffa"} Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.458306 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw"] Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.486431 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zd44g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-586b95b788-9fsf5_openstack-operators(2dfa14d3-9496-44cb-948b-e4065a9930c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.487735 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" podUID="2dfa14d3-9496-44cb-948b-e4065a9930c8" Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.495162 
4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59rtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-djzsw_openstack-operators(6719d674-1dac-4af1-859b-ea6a2186a20a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.496879 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a" Feb 02 14:49:18 crc kubenswrapper[4869]: I0202 14:49:18.931261 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.931936 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:18 crc kubenswrapper[4869]: E0202 14:49:18.932058 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:22.932022393 +0000 UTC m=+964.576659343 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.253641 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:49:19 crc kubenswrapper[4869]: W0202 14:49:19.344144 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8bef13a_7759_4c87_be0b_09017f74f36e.slice/crio-b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a WatchSource:0}: Error finding container b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a: Status 404 returned error can't find the container with id b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.536801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" event={"ID":"cf357940-5e8d-4111-86e6-1fafd5e670cd","Type":"ContainerStarted","Data":"926e15b739c6f03ebe2b4e4dc35188306c5223bc2e03a3e6f0c7ffb2aaef088e"} Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.538727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" event={"ID":"2dfa14d3-9496-44cb-948b-e4065a9930c8","Type":"ContainerStarted","Data":"770a9320d96169d0bbb22a9377187377241d576110e2a54baf61ea71b02dfce8"} Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.560885 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:19 crc kubenswrapper[4869]: E0202 14:49:19.562745 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:19 crc kubenswrapper[4869]: E0202 14:49:19.562805 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:23.562785376 +0000 UTC m=+965.207422146 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:19 crc kubenswrapper[4869]: E0202 14:49:19.587651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" podUID="2dfa14d3-9496-44cb-948b-e4065a9930c8" Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.588435 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerStarted","Data":"b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a"} Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.614554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" event={"ID":"6719d674-1dac-4af1-859b-ea6a2186a20a","Type":"ContainerStarted","Data":"a2218b87a7b0fae5af909cb8be6f92dbe6e298bd3eb6f3252f40f1912552acea"} Feb 02 14:49:19 crc kubenswrapper[4869]: E0202 14:49:19.621850 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a" Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.625747 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" event={"ID":"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb","Type":"ContainerStarted","Data":"7eb1becba457956f29745fa0781faa4b802729fe13f354544b25af7864351dcc"} Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.649122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" event={"ID":"06f5e083-c0ea-4ad0-9a07-50707d84be61","Type":"ContainerStarted","Data":"e7e8f26ce23a730172aff1faf8e6aa0a150fa76b34f4d81f2f2a8857bb0e9c1a"} Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.695376 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" event={"ID":"ac2b0707-5906-40df-9457-06739f19df84","Type":"ContainerStarted","Data":"88c434aad9ad58199752e96590ad12e2c6b934f4898a7cc0f7e46791b942e5e3"} Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.705537 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" event={"ID":"7af79025-a32d-4e73-9559-5991093e986a","Type":"ContainerStarted","Data":"9ac6741e8253fe00ef5c5537ae260dee0b72449d5d49db322b94f71abd6c6ced"} Feb 02 14:49:19 crc kubenswrapper[4869]: I0202 14:49:19.717603 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" 
event={"ID":"7e9b35b2-f20d-4102-b541-63d2822c215d","Type":"ContainerStarted","Data":"3a13c4491e87656cc0b11ffcec9957dc38d9e5630a640ace1b6c38b86044ae20"} Feb 02 14:49:20 crc kubenswrapper[4869]: I0202 14:49:20.083235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:20 crc kubenswrapper[4869]: I0202 14:49:20.083895 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.084300 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.084382 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:24.084358172 +0000 UTC m=+965.728994942 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.085482 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.085542 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:24.085528611 +0000 UTC m=+965.730165381 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found Feb 02 14:49:20 crc kubenswrapper[4869]: I0202 14:49:20.744179 4869 generic.go:334] "Generic (PLEG): container finished" podID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerID="5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f" exitCode=0 Feb 02 14:49:20 crc kubenswrapper[4869]: I0202 14:49:20.746155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerDied","Data":"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f"} Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.748545 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:3fd1f7623a4b32505f51f329116f7e13bb4cfd320e920961a5b86441a89326d6\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" podUID="2dfa14d3-9496-44cb-948b-e4065a9930c8" Feb 02 14:49:20 crc kubenswrapper[4869]: E0202 14:49:20.748623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a" Feb 02 14:49:22 crc kubenswrapper[4869]: I0202 14:49:22.914217 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:22 crc kubenswrapper[4869]: I0202 14:49:22.924454 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:22 crc kubenswrapper[4869]: I0202 14:49:22.984287 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:22 crc kubenswrapper[4869]: E0202 14:49:22.984551 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:22 crc kubenswrapper[4869]: E0202 14:49:22.984674 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert podName:c0779518-9e33-43e3-b373-263d74fbbd0f nodeName:}" failed. No retries permitted until 2026-02-02 14:49:30.98464165 +0000 UTC m=+972.629278600 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert") pod "infra-operator-controller-manager-79955696d6-b4jxj" (UID: "c0779518-9e33-43e3-b373-263d74fbbd0f") : secret "infra-operator-webhook-server-cert" not found Feb 02 14:49:23 crc kubenswrapper[4869]: I0202 14:49:23.033299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:23 crc kubenswrapper[4869]: I0202 14:49:23.604897 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:23 crc kubenswrapper[4869]: E0202 14:49:23.605213 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:23 crc kubenswrapper[4869]: E0202 14:49:23.605307 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert podName:bd94e783-b3ec-4d7e-b669-98255f029da6 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:31.605275661 +0000 UTC m=+973.249912431 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" (UID: "bd94e783-b3ec-4d7e-b669-98255f029da6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 14:49:23 crc kubenswrapper[4869]: I0202 14:49:23.866192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:24 crc kubenswrapper[4869]: I0202 14:49:24.116290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:24 crc kubenswrapper[4869]: I0202 14:49:24.116435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:24 crc kubenswrapper[4869]: E0202 14:49:24.116598 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 14:49:24 crc kubenswrapper[4869]: E0202 14:49:24.116735 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:32.116704005 +0000 UTC m=+973.761340965 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found Feb 02 14:49:24 crc kubenswrapper[4869]: E0202 14:49:24.116626 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 14:49:24 crc kubenswrapper[4869]: E0202 14:49:24.116812 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:32.116790127 +0000 UTC m=+973.761426897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found Feb 02 14:49:25 crc kubenswrapper[4869]: I0202 14:49:25.153672 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:26 crc kubenswrapper[4869]: I0202 14:49:26.832471 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cgj22" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" containerID="cri-o://ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" gracePeriod=2 Feb 02 14:49:27 crc kubenswrapper[4869]: I0202 14:49:27.841500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerStarted","Data":"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b"} Feb 02 14:49:27 crc kubenswrapper[4869]: I0202 14:49:27.845784 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" exitCode=0 Feb 02 14:49:27 crc kubenswrapper[4869]: I0202 14:49:27.845870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerDied","Data":"ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091"} Feb 02 14:49:28 crc kubenswrapper[4869]: I0202 14:49:28.856418 4869 generic.go:334] "Generic (PLEG): container finished" podID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerID="22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b" exitCode=0 Feb 02 14:49:28 crc kubenswrapper[4869]: I0202 14:49:28.856483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerDied","Data":"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b"} Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.050479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.067635 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0779518-9e33-43e3-b373-263d74fbbd0f-cert\") pod \"infra-operator-controller-manager-79955696d6-b4jxj\" (UID: \"c0779518-9e33-43e3-b373-263d74fbbd0f\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.092678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-46pbm" Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.101038 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.660480 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.666626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd94e783-b3ec-4d7e-b669-98255f029da6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl\" (UID: \"bd94e783-b3ec-4d7e-b669-98255f029da6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.818981 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xvmqq" Feb 02 14:49:31 crc kubenswrapper[4869]: I0202 14:49:31.827706 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:49:32 crc kubenswrapper[4869]: I0202 14:49:32.179514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:32 crc kubenswrapper[4869]: I0202 14:49:32.179624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.179766 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.179822 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.179889 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:48.179868719 +0000 UTC m=+989.824505489 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "webhook-server-cert" not found Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.179923 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs podName:32aa6b38-d480-426c-a36c-4cf34c082e73 nodeName:}" failed. No retries permitted until 2026-02-02 14:49:48.17989913 +0000 UTC m=+989.824535900 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs") pod "openstack-operator-controller-manager-58566f7c4b-mnxtb" (UID: "32aa6b38-d480-426c-a36c-4cf34c082e73") : secret "metrics-server-cert" not found Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.914870 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.915734 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.916361 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:32 crc kubenswrapper[4869]: E0202 14:49:32.916407 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cgj22" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:49:33 crc kubenswrapper[4869]: E0202 14:49:33.463820 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:9fa80e6901c5db08f3ed7bece144698223b0b60d2309a2b509b0a23dd07042d9" Feb 02 14:49:33 crc kubenswrapper[4869]: E0202 14:49:33.464090 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:9fa80e6901c5db08f3ed7bece144698223b0b60d2309a2b509b0a23dd07042d9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nz42l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-87bd9d46f-762xj_openstack-operators(77902d6e-ef76-42b0-a40c-0b51f383f580): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:33 crc kubenswrapper[4869]: E0202 14:49:33.465332 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" podUID="77902d6e-ef76-42b0-a40c-0b51f383f580" Feb 02 14:49:33 crc kubenswrapper[4869]: E0202 14:49:33.895378 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:9fa80e6901c5db08f3ed7bece144698223b0b60d2309a2b509b0a23dd07042d9\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" podUID="77902d6e-ef76-42b0-a40c-0b51f383f580" Feb 02 14:49:34 crc kubenswrapper[4869]: E0202 14:49:34.165224 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:be0d0110cb736cbaaf0508da2a961913ca822bbaf5592ae8f23812570d9c2120" Feb 02 14:49:34 crc kubenswrapper[4869]: E0202 14:49:34.165530 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:be0d0110cb736cbaaf0508da2a961913ca822bbaf5592ae8f23812570d9c2120,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8xsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7775d87d9d-l2b72_openstack-operators(993dae41-359f-47f7-9a2a-38f7c97d49de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:34 crc kubenswrapper[4869]: E0202 14:49:34.166830 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" podUID="993dae41-359f-47f7-9a2a-38f7c97d49de" Feb 02 14:49:34 crc kubenswrapper[4869]: E0202 14:49:34.901885 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:be0d0110cb736cbaaf0508da2a961913ca822bbaf5592ae8f23812570d9c2120\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" podUID="993dae41-359f-47f7-9a2a-38f7c97d49de" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.036707 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.037024 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xwfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-hpnsb_openstack-operators(3b0cf904-7af8-4e57-a664-7e594e557445): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.038294 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" podUID="3b0cf904-7af8-4e57-a664-7e594e557445" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.740495 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:0d329ab746aa36e748f3d236599b186dc9787c63630f91bc2975d7e784d837be" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.740792 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:0d329ab746aa36e748f3d236599b186dc9787c63630f91bc2975d7e784d837be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66khg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-8f4c5cb64-pbxmj_openstack-operators(5ea40597-21e0-4548-ab09-e381dac894ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.742068 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" podUID="5ea40597-21e0-4548-ab09-e381dac894ef" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.909497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:0d329ab746aa36e748f3d236599b186dc9787c63630f91bc2975d7e784d837be\\\"\"" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" podUID="5ea40597-21e0-4548-ab09-e381dac894ef" Feb 02 14:49:35 crc kubenswrapper[4869]: E0202 14:49:35.911437 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" 
podUID="3b0cf904-7af8-4e57-a664-7e594e557445" Feb 02 14:49:36 crc kubenswrapper[4869]: E0202 14:49:36.454111 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Feb 02 14:49:36 crc kubenswrapper[4869]: E0202 14:49:36.454855 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mfxl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-6vnjh_openstack-operators(ac2b0707-5906-40df-9457-06739f19df84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:36 crc kubenswrapper[4869]: E0202 14:49:36.456178 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" podUID="ac2b0707-5906-40df-9457-06739f19df84" Feb 02 14:49:36 crc kubenswrapper[4869]: E0202 14:49:36.921088 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" podUID="ac2b0707-5906-40df-9457-06739f19df84" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.266059 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:674639c6f9130078d6b5e4bace30435325651c82f3090681562c9cf6655b9576" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.266331 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:674639c6f9130078d6b5e4bace30435325651c82f3090681562c9cf6655b9576,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kc7b2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-565849b54-fm2kj_openstack-operators(7af79025-a32d-4e73-9559-5991093e986a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.268256 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" 
podUID="7af79025-a32d-4e73-9559-5991093e986a" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.926925 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:674639c6f9130078d6b5e4bace30435325651c82f3090681562c9cf6655b9576\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" podUID="7af79025-a32d-4e73-9559-5991093e986a" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.950111 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.950359 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bw9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-7b89fdf75b-zdwh8_openstack-operators(98a357a8-0e70-4f30-a41a-8dde25612a8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:37 crc kubenswrapper[4869]: E0202 14:49:37.951622 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" podUID="98a357a8-0e70-4f30-a41a-8dde25612a8a" Feb 02 14:49:38 crc kubenswrapper[4869]: E0202 14:49:38.695332 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Feb 02 14:49:38 crc kubenswrapper[4869]: E0202 14:49:38.695958 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zwlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-ntthk_openstack-operators(06f5e083-c0ea-4ad0-9a07-50707d84be61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:38 crc kubenswrapper[4869]: E0202 14:49:38.697507 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" podUID="06f5e083-c0ea-4ad0-9a07-50707d84be61" Feb 02 14:49:38 crc 
kubenswrapper[4869]: E0202 14:49:38.935953 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" podUID="06f5e083-c0ea-4ad0-9a07-50707d84be61" Feb 02 14:49:38 crc kubenswrapper[4869]: E0202 14:49:38.936443 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:8f8c3f4484960b48b4aa30b66deb78e54443e5d0a91ce7e34f3cd34675d7eda4\\\"\"" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" podUID="98a357a8-0e70-4f30-a41a-8dde25612a8a" Feb 02 14:49:39 crc kubenswrapper[4869]: E0202 14:49:39.422853 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:840e391b9a51241176705a421996a17a1433878433ce8720d4ed1a4b69319ccd" Feb 02 14:49:39 crc kubenswrapper[4869]: E0202 14:49:39.423201 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:840e391b9a51241176705a421996a17a1433878433ce8720d4ed1a4b69319ccd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7m4qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod barbican-operator-controller-manager-fc589b45f-28mqn_openstack-operators(f605f0c6-e023-433b-8e78-373b32387809): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:39 crc kubenswrapper[4869]: E0202 14:49:39.424474 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" podUID="f605f0c6-e023-433b-8e78-373b32387809" Feb 02 14:49:39 crc kubenswrapper[4869]: E0202 14:49:39.942220 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:840e391b9a51241176705a421996a17a1433878433ce8720d4ed1a4b69319ccd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" podUID="f605f0c6-e023-433b-8e78-373b32387809" Feb 02 14:49:41 crc kubenswrapper[4869]: E0202 14:49:41.632669 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:3b23ff94b16ca60ae67e31a0f4e85af246c7f16dd03ed8ab6f33f81b3a3a8aa8" Feb 02 14:49:41 crc kubenswrapper[4869]: E0202 14:49:41.633375 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:3b23ff94b16ca60ae67e31a0f4e85af246c7f16dd03ed8ab6f33f81b3a3a8aa8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8xqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5d77f4dbc9-qmt77_openstack-operators(f07dc950-121d-4a91-8489-dfc187196775): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:41 crc kubenswrapper[4869]: E0202 14:49:41.635139 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" podUID="f07dc950-121d-4a91-8489-dfc187196775" Feb 02 14:49:41 crc kubenswrapper[4869]: E0202 14:49:41.953628 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:3b23ff94b16ca60ae67e31a0f4e85af246c7f16dd03ed8ab6f33f81b3a3a8aa8\\\"\"" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" podUID="f07dc950-121d-4a91-8489-dfc187196775" Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.270508 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.270759 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jn2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-28zx5_openstack-operators(cf357940-5e8d-4111-86e6-1fafd5e670cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.272233 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" podUID="cf357940-5e8d-4111-86e6-1fafd5e670cd" Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.915627 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.916378 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.916821 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:42 crc kubenswrapper[4869]: E0202 14:49:42.916862 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cgj22" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:49:42 crc 
kubenswrapper[4869]: E0202 14:49:42.981962 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" podUID="cf357940-5e8d-4111-86e6-1fafd5e670cd" Feb 02 14:49:43 crc kubenswrapper[4869]: E0202 14:49:43.757171 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9" Feb 02 14:49:43 crc kubenswrapper[4869]: E0202 14:49:43.758071 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkctg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7b89ddb58-h2kl2_openstack-operators(7e9b35b2-f20d-4102-b541-63d2822c215d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:43 crc kubenswrapper[4869]: E0202 14:49:43.759529 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" podUID="7e9b35b2-f20d-4102-b541-63d2822c215d" Feb 02 14:49:43 crc kubenswrapper[4869]: E0202 14:49:43.987501 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:cb65c47d365cb65a29236ac7c457cbbbff75da7389dddc92859e087dea1face9\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" podUID="7e9b35b2-f20d-4102-b541-63d2822c215d" Feb 02 14:49:44 crc kubenswrapper[4869]: E0202 14:49:44.526955 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f" Feb 02 14:49:44 crc kubenswrapper[4869]: E0202 14:49:44.527723 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8tj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-64469b487f-m9czv_openstack-operators(f27a3d01-fbc5-46d9-9c11-ef6c21ead605): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:44 crc kubenswrapper[4869]: E0202 
14:49:44.528954 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" podUID="f27a3d01-fbc5-46d9-9c11-ef6c21ead605" Feb 02 14:49:44 crc kubenswrapper[4869]: E0202 14:49:44.993618 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:f6042794464b8ad49246666befd3943cb3ca212334333c0f6fe7a56ff3f6c73f\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" podUID="f27a3d01-fbc5-46d9-9c11-ef6c21ead605" Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.304072 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.304141 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.304195 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.305065 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:49:45 crc kubenswrapper[4869]: I0202 14:49:45.305137 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56" gracePeriod=600 Feb 02 14:49:46 crc kubenswrapper[4869]: I0202 14:49:46.000971 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56" exitCode=0 Feb 02 14:49:46 crc kubenswrapper[4869]: I0202 14:49:46.001039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56"} Feb 02 14:49:46 crc kubenswrapper[4869]: I0202 14:49:46.001106 4869 scope.go:117] "RemoveContainer" containerID="e04db51ca2875f7a230a2b63845187d4e2f287a30bbe2dbd2fa0c5a5d7d0a486" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.275888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.276423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.283282 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-webhook-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.283483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32aa6b38-d480-426c-a36c-4cf34c082e73-metrics-certs\") pod \"openstack-operator-controller-manager-58566f7c4b-mnxtb\" (UID: \"32aa6b38-d480-426c-a36c-4cf34c082e73\") " pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.576195 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-649np" Feb 02 14:49:48 crc kubenswrapper[4869]: I0202 14:49:48.583805 4869 util.go:30] "No sandbox for pod can be found. 
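The MountVolume entries above are the kubelet volume manager reconciling desired state (the openstack-operator-controller-manager pod wants its webhook-certs and metrics-certs secrets mounted) against actual state, logging once when the operation starts and once when SetUp succeeds. A minimal sketch of that desired-versus-actual pass, with all names illustrative:

package main

import "fmt"

// reconcile performs one toy pass in the style of the volume manager entries
// above: volumes desired but not yet mounted get a MountVolume operation, and
// mounted volumes no longer desired get an UnmountVolume operation. The real
// operations run asynchronously; here success is recorded inline.
func reconcile(desired, mounted map[string]bool) {
	for v := range desired {
		if !mounted[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
			mounted[v] = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
		}
	}
	for v := range mounted {
		if !desired[v] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", v)
			delete(mounted, v)
		}
	}
}

func main() {
	desired := map[string]bool{"webhook-certs": true, "metrics-certs": true}
	mounted := map[string]bool{}
	reconcile(desired, mounted) // mounts both secrets, as logged above
}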
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:52 crc kubenswrapper[4869]: E0202 14:49:52.915386 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:52 crc kubenswrapper[4869]: E0202 14:49:52.916849 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:52 crc kubenswrapper[4869]: E0202 14:49:52.917154 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" cmd=["grpc_health_probe","-addr=:50051"] Feb 02 14:49:52 crc kubenswrapper[4869]: E0202 14:49:52.917181 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-cgj22" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.314516 4869 util.go:48] "No ready sandbox for pod can be found. 
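The repeated ExecSync failures above are readiness probes (grpc_health_probe against :50051) running against a container whose process has already exited; the earlier machine-config-daemon entries show the liveness counterpart, where enough consecutive failures get the container killed and restarted. The sketch below covers only the consecutive-failure accounting; the FailureThreshold:3 value matches the Container specs logged above, while the type and method names are illustrative.

package main

import "fmt"

// probeWorker mirrors the accounting implied by the entries above: failures
// must be consecutive, and only at failureThreshold does the result flip.
// For a readiness probe the pod is marked unready; for a liveness probe the
// kubelet kills the container (the machine-config-daemon entries above show
// that path, with gracePeriod=600). Names here are illustrative.
type probeWorker struct {
	failureThreshold int
	failures         int
}

// observe records one probe result and reports whether the threshold tripped.
func (w *probeWorker) observe(ok bool) bool {
	if ok {
		w.failures = 0 // any success resets the streak
		return false
	}
	w.failures++
	return w.failures >= w.failureThreshold
}

func main() {
	w := probeWorker{failureThreshold: 3} // FailureThreshold:3, as in the specs above
	for i, ok := range []bool{false, false, false} {
		if w.observe(ok) {
			fmt.Printf("probe %d: threshold reached, act on the result\n", i+1)
		}
	}
}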
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.349686 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.350033 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8j79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5644b66645-2chmz_openstack-operators(98a25bb6-75b1-49ad-8d7c-cc4e763470ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.377261 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" podUID="98a25bb6-75b1-49ad-8d7c-cc4e763470ec" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.482477 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") pod \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.482615 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") pod \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.482741 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") pod \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\" (UID: \"ff654c3f-299a-4ca0-b9b0-ecd963f680c9\") " Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.484320 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities" (OuterVolumeSpecName: "utilities") pod "ff654c3f-299a-4ca0-b9b0-ecd963f680c9" (UID: "ff654c3f-299a-4ca0-b9b0-ecd963f680c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.489762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr" (OuterVolumeSpecName: "kube-api-access-bc2nr") pod "ff654c3f-299a-4ca0-b9b0-ecd963f680c9" (UID: "ff654c3f-299a-4ca0-b9b0-ecd963f680c9"). InnerVolumeSpecName "kube-api-access-bc2nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.508335 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff654c3f-299a-4ca0-b9b0-ecd963f680c9" (UID: "ff654c3f-299a-4ca0-b9b0-ecd963f680c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.585422 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc2nr\" (UniqueName: \"kubernetes.io/projected/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-kube-api-access-bc2nr\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.585464 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:53 crc kubenswrapper[4869]: I0202 14:49:53.585474 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff654c3f-299a-4ca0-b9b0-ecd963f680c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.853901 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.854165 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59rtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-djzsw_openstack-operators(6719d674-1dac-4af1-859b-ea6a2186a20a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:49:53 crc kubenswrapper[4869]: E0202 14:49:53.855375 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a" Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.071131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgj22" event={"ID":"ff654c3f-299a-4ca0-b9b0-ecd963f680c9","Type":"ContainerDied","Data":"34a6135c6d9cce7c37dc455df3519275e3b6866fffb9f04458808c6fea6ccae2"} Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.071237 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgj22" Feb 02 14:49:54 crc kubenswrapper[4869]: E0202 14:49:54.089485 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:6b951a651861f6e805ceec19cad5a35a8dfe6fd9536acebd3c197ca4659d8a51\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" podUID="98a25bb6-75b1-49ad-8d7c-cc4e763470ec" Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.132686 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.139849 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgj22"] Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.274567 4869 scope.go:117] "RemoveContainer" containerID="ed514d4fb92ee5ff5875f888ad6f83e1e90a8a51e7c926bb847c1c383ee95091" Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.528857 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj"] Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.587874 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl"] Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.828717 4869 scope.go:117] "RemoveContainer" containerID="292a8800f1074a89c8517ba7b2c39a8724252f08e7b9ac9c8fe944e9593cab13" Feb 02 14:49:54 crc kubenswrapper[4869]: I0202 14:49:54.898235 4869 scope.go:117] "RemoveContainer" containerID="eda72bcc55c95d316258cf868924e75f80c68e4d577ed22a50a3cec2426c387b" Feb 02 14:49:55 crc kubenswrapper[4869]: I0202 14:49:55.111267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" event={"ID":"bd94e783-b3ec-4d7e-b669-98255f029da6","Type":"ContainerStarted","Data":"06f524340ca6f7602aa48458621e7c6091d0cf2fa45c25aee91a0ae804a14a5c"} Feb 02 14:49:55 crc kubenswrapper[4869]: I0202 14:49:55.127285 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" event={"ID":"c0779518-9e33-43e3-b373-263d74fbbd0f","Type":"ContainerStarted","Data":"de215708ac9df5c372c8284f222ad9800dbe2a2e9010105836019917220bc997"} Feb 02 14:49:55 crc kubenswrapper[4869]: I0202 14:49:55.355378 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb"] Feb 02 14:49:55 crc kubenswrapper[4869]: W0202 14:49:55.425624 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32aa6b38_d480_426c_a36c_4cf34c082e73.slice/crio-ea542c8cc320513ee44c3365b48091d8934e8ba065471b82e0de2380b1d9d42d WatchSource:0}: Error finding container ea542c8cc320513ee44c3365b48091d8934e8ba065471b82e0de2380b1d9d42d: Status 404 returned error can't find the container with id ea542c8cc320513ee44c3365b48091d8934e8ba065471b82e0de2380b1d9d42d Feb 02 14:49:55 crc kubenswrapper[4869]: I0202 14:49:55.479307 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" path="/var/lib/kubelet/pods/ff654c3f-299a-4ca0-b9b0-ecd963f680c9/volumes" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.159523 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" event={"ID":"ac2b0707-5906-40df-9457-06739f19df84","Type":"ContainerStarted","Data":"24a82b94e8a8fac36c907f81426cc483ab799fa2ad64b0536a54a3e4030f8ad2"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.161291 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.172124 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerStarted","Data":"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.200250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" event={"ID":"7af79025-a32d-4e73-9559-5991093e986a","Type":"ContainerStarted","Data":"38fab07f2a2003158ac96ea51832181cdb6a9619fc4e382bca67532616d594e0"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.200694 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.217193 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" podStartSLOduration=4.832949263 podStartE2EDuration="41.21716133s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.485599659 +0000 UTC m=+960.130236429" lastFinishedPulling="2026-02-02 14:49:54.869811726 +0000 UTC m=+996.514448496" observedRunningTime="2026-02-02 14:49:56.2114972 +0000 UTC m=+997.856133980" watchObservedRunningTime="2026-02-02 14:49:56.21716133 +0000 UTC m=+997.861798090" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.232487 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" event={"ID":"5ea40597-21e0-4548-ab09-e381dac894ef","Type":"ContainerStarted","Data":"72d46cf7e3cadf0e98acca38a77ae82eb13fd8479f0ded3ef72b99ad2ec9339f"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.232858 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.249666 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" 
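The burst of "SyncLoop (PLEG)" entries above is the pod lifecycle event generator: it relists container states from the runtime, diffs them against the previous relist, and feeds ContainerStarted/ContainerDied events into the sync loop (the 404 from the cadvisor watch above is a separate, watch-based path racing the same freshly created container). A minimal diff sketch follows; the state and event names mirror the log, the container IDs are truncated forms of IDs seen above, and everything else is illustrative.

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

// relist diffs two snapshots of container state and emits the transitions the
// sync loop reacts to in the entries above. Map iteration order is not
// deterministic, so event ordering between containers is arbitrary.
func relist(prev, cur map[string]state) []string {
	var events []string
	for id, s := range cur {
		switch old := prev[id]; {
		case old != running && s == running:
			events = append(events, "ContainerStarted "+id)
		case old == running && s == exited:
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	prev := map[string]state{"132088891d38": running}
	cur := map[string]state{"132088891d38": exited, "ea542c8cc320": running}
	for _, e := range relist(prev, cur) {
		fmt.Println(e) // one ContainerDied and one ContainerStarted event
	}
}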
event={"ID":"3b0cf904-7af8-4e57-a664-7e594e557445","Type":"ContainerStarted","Data":"56282af3e2af06a979f62d650cdf0f65404b47825edc532462ccca46466b9917"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.251852 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.252825 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mk6t7" podStartSLOduration=6.171161966 podStartE2EDuration="39.252799172s" podCreationTimestamp="2026-02-02 14:49:17 +0000 UTC" firstStartedPulling="2026-02-02 14:49:20.750013259 +0000 UTC m=+962.394650029" lastFinishedPulling="2026-02-02 14:49:53.831650475 +0000 UTC m=+995.476287235" observedRunningTime="2026-02-02 14:49:56.240283562 +0000 UTC m=+997.884920332" watchObservedRunningTime="2026-02-02 14:49:56.252799172 +0000 UTC m=+997.897435942" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.279008 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" podStartSLOduration=4.783829074 podStartE2EDuration="41.27898095s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.362215792 +0000 UTC m=+960.006852562" lastFinishedPulling="2026-02-02 14:49:54.857367668 +0000 UTC m=+996.502004438" observedRunningTime="2026-02-02 14:49:56.276540769 +0000 UTC m=+997.921177539" watchObservedRunningTime="2026-02-02 14:49:56.27898095 +0000 UTC m=+997.923617720" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.280447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" event={"ID":"993dae41-359f-47f7-9a2a-38f7c97d49de","Type":"ContainerStarted","Data":"2e3b62b5604f6cc7141dea720abafa7d154d2db3a239a304ba52cb43a0df75a9"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.281147 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.322119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" event={"ID":"f605f0c6-e023-433b-8e78-373b32387809","Type":"ContainerStarted","Data":"dd22013abd5eb7835955913ca084fe8ff662493eb8f3bf76692b608f74a4912d"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.323178 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.361550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" event={"ID":"2dfa14d3-9496-44cb-948b-e4065a9930c8","Type":"ContainerStarted","Data":"5830f84959c97c617ff24abe5b6b4c7213bb98b0e1447fa18abc7da308f5b925"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.362192 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" podStartSLOduration=5.083036055 podStartE2EDuration="42.362166629s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.579882714 +0000 UTC m=+959.224519484" 
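The pod_startup_latency_tracker entries above carry enough data to check their own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). For the telemetry-operator pod above, 41.27898095 - (996.502004438 - 960.006852562) = 4.783829074, exactly the logged podStartSLOduration. The relationship is inferred from the logged values rather than quoted from kubelet source:

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartSLOduration arithmetic for the telemetry-operator
// entry above, using the monotonic m=+ offsets from the log.
func main() {
	e2e := 41.27898095                    // podStartE2EDuration, seconds
	firstStartedPulling := 960.006852562  // m=+ offset, seconds
	lastFinishedPulling := 996.502004438  // m=+ offset, seconds

	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Println(time.Duration(slo * float64(time.Second))) // ~4.783829074s, as logged
}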
lastFinishedPulling="2026-02-02 14:49:54.859013288 +0000 UTC m=+996.503650058" observedRunningTime="2026-02-02 14:49:56.319127704 +0000 UTC m=+997.963764494" watchObservedRunningTime="2026-02-02 14:49:56.362166629 +0000 UTC m=+998.006803399" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.362709 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.365185 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" podStartSLOduration=4.599640084 podStartE2EDuration="42.365172144s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.105804484 +0000 UTC m=+958.750441264" lastFinishedPulling="2026-02-02 14:49:54.871336554 +0000 UTC m=+996.515973324" observedRunningTime="2026-02-02 14:49:56.363030451 +0000 UTC m=+998.007667251" watchObservedRunningTime="2026-02-02 14:49:56.365172144 +0000 UTC m=+998.009808914" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.369090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" event={"ID":"fc6638c4-5467-48c9-b725-284cd08372f6","Type":"ContainerStarted","Data":"2d0b907418dea9ffc40feceaf23d6e99adcbf632f08050a9b8429112104a314a"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.370016 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.382115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" event={"ID":"32aa6b38-d480-426c-a36c-4cf34c082e73","Type":"ContainerStarted","Data":"731c6e8f7adb05918c425d07d4f80cdea7fc3dc283ecaeb106b342883d620d25"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.382183 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" event={"ID":"32aa6b38-d480-426c-a36c-4cf34c082e73","Type":"ContainerStarted","Data":"ea542c8cc320513ee44c3365b48091d8934e8ba065471b82e0de2380b1d9d42d"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.382211 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.395555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" event={"ID":"f07dc950-121d-4a91-8489-dfc187196775","Type":"ContainerStarted","Data":"b948057143600ebda6a0fc622ad560559639317a9b5839a6e62523574793252b"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.396433 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.406546 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" event={"ID":"98a357a8-0e70-4f30-a41a-8dde25612a8a","Type":"ContainerStarted","Data":"0f2dcdd6d5e247c472d850cc8c16dfc20c8fe707fd699c510c5d617b8216258b"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.407505 
4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.419093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" event={"ID":"53467de5-c9d7-4aa0-973d-180c8cb84b27","Type":"ContainerStarted","Data":"eda99b8e20106d4f310f7cf46603d8e510a0a9993d0f155d80a4d2b65139eda1"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.420162 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.431236 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.452334 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" podStartSLOduration=5.848471494 podStartE2EDuration="41.452304021s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.486199245 +0000 UTC m=+960.130836015" lastFinishedPulling="2026-02-02 14:49:54.090031772 +0000 UTC m=+995.734668542" observedRunningTime="2026-02-02 14:49:56.432695145 +0000 UTC m=+998.077331915" watchObservedRunningTime="2026-02-02 14:49:56.452304021 +0000 UTC m=+998.096940791" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.462030 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" event={"ID":"c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb","Type":"ContainerStarted","Data":"451f33842f029503a039ed91632b0e5da30bafa4937ad999206a0886ef62d501"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.463433 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.487131 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" podStartSLOduration=5.150984036 podStartE2EDuration="42.487109003s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.536077367 +0000 UTC m=+959.180714137" lastFinishedPulling="2026-02-02 14:49:54.872202334 +0000 UTC m=+996.516839104" observedRunningTime="2026-02-02 14:49:56.486370724 +0000 UTC m=+998.131007484" watchObservedRunningTime="2026-02-02 14:49:56.487109003 +0000 UTC m=+998.131745773" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.492672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" event={"ID":"cf357940-5e8d-4111-86e6-1fafd5e670cd","Type":"ContainerStarted","Data":"77c4701e54c8897d490b6c0e01b2ed81d1ece388868aac728c18685da9fafeb7"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.493609 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.524335 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" event={"ID":"77902d6e-ef76-42b0-a40c-0b51f383f580","Type":"ContainerStarted","Data":"6377928fd851051af58fc7bce4f72ee2e99e7bb65a58b9265d903aae7639a192"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.525404 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.544478 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" podStartSLOduration=5.240556453 podStartE2EDuration="42.544452652s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.554852993 +0000 UTC m=+959.199489753" lastFinishedPulling="2026-02-02 14:49:54.858749182 +0000 UTC m=+996.503385952" observedRunningTime="2026-02-02 14:49:56.535694685 +0000 UTC m=+998.180331455" watchObservedRunningTime="2026-02-02 14:49:56.544452652 +0000 UTC m=+998.189089422" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.559057 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" event={"ID":"ad8b0f9a-67d7-4897-af4b-f344b3d1c502","Type":"ContainerStarted","Data":"7aad9305cd0b916f4f4cde15a0ef3b46620277c76519be9112e199231273258a"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.559842 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.574825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" event={"ID":"06f5e083-c0ea-4ad0-9a07-50707d84be61","Type":"ContainerStarted","Data":"a5e28465d91360550647c580503f315794c653372ab882ad7ea02655bf4b7fec"} Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.575209 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.599511 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" podStartSLOduration=5.189612723 podStartE2EDuration="41.599475844s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.46220814 +0000 UTC m=+960.106844910" lastFinishedPulling="2026-02-02 14:49:54.872071261 +0000 UTC m=+996.516708031" observedRunningTime="2026-02-02 14:49:56.583853377 +0000 UTC m=+998.228490147" watchObservedRunningTime="2026-02-02 14:49:56.599475844 +0000 UTC m=+998.244112614" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.624470 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" podStartSLOduration=5.778576711 podStartE2EDuration="42.624438292s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.161674899 +0000 UTC m=+958.806311669" lastFinishedPulling="2026-02-02 14:49:54.00753648 +0000 UTC m=+995.652173250" observedRunningTime="2026-02-02 14:49:56.619487009 +0000 UTC m=+998.264123789" watchObservedRunningTime="2026-02-02 14:49:56.624438292 +0000 UTC m=+998.269075062" Feb 02 
14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.659837 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" podStartSLOduration=4.086570938 podStartE2EDuration="42.659812588s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:16.869073968 +0000 UTC m=+958.513710738" lastFinishedPulling="2026-02-02 14:49:55.442315618 +0000 UTC m=+997.086952388" observedRunningTime="2026-02-02 14:49:56.654463805 +0000 UTC m=+998.299100585" watchObservedRunningTime="2026-02-02 14:49:56.659812588 +0000 UTC m=+998.304449358" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.819814 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" podStartSLOduration=6.081796124 podStartE2EDuration="42.819781317s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.093502999 +0000 UTC m=+958.738139769" lastFinishedPulling="2026-02-02 14:49:53.831488192 +0000 UTC m=+995.476124962" observedRunningTime="2026-02-02 14:49:56.788989075 +0000 UTC m=+998.433625865" watchObservedRunningTime="2026-02-02 14:49:56.819781317 +0000 UTC m=+998.464418087" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.902095 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" podStartSLOduration=41.902061215 podStartE2EDuration="41.902061215s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:49:56.896722322 +0000 UTC m=+998.541359092" watchObservedRunningTime="2026-02-02 14:49:56.902061215 +0000 UTC m=+998.546697985" Feb 02 14:49:56 crc kubenswrapper[4869]: I0202 14:49:56.969468 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" podStartSLOduration=8.195866678 podStartE2EDuration="42.969440482s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.459282878 +0000 UTC m=+960.103919648" lastFinishedPulling="2026-02-02 14:49:53.232856682 +0000 UTC m=+994.877493452" observedRunningTime="2026-02-02 14:49:56.952646037 +0000 UTC m=+998.597282817" watchObservedRunningTime="2026-02-02 14:49:56.969440482 +0000 UTC m=+998.614077252" Feb 02 14:49:57 crc kubenswrapper[4869]: I0202 14:49:57.015801 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" podStartSLOduration=6.025455602 podStartE2EDuration="43.015773009s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.88079063 +0000 UTC m=+959.525427400" lastFinishedPulling="2026-02-02 14:49:54.871108037 +0000 UTC m=+996.515744807" observedRunningTime="2026-02-02 14:49:57.008827267 +0000 UTC m=+998.653464047" watchObservedRunningTime="2026-02-02 14:49:57.015773009 +0000 UTC m=+998.660409779" Feb 02 14:49:57 crc kubenswrapper[4869]: I0202 14:49:57.071810 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" podStartSLOduration=5.094530723 podStartE2EDuration="42.071770236s" podCreationTimestamp="2026-02-02 14:49:15 +0000 
UTC" firstStartedPulling="2026-02-02 14:49:17.880601276 +0000 UTC m=+959.525238046" lastFinishedPulling="2026-02-02 14:49:54.857840789 +0000 UTC m=+996.502477559" observedRunningTime="2026-02-02 14:49:57.069848418 +0000 UTC m=+998.714485198" watchObservedRunningTime="2026-02-02 14:49:57.071770236 +0000 UTC m=+998.716407016" Feb 02 14:49:57 crc kubenswrapper[4869]: I0202 14:49:57.117732 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" podStartSLOduration=6.299221456 podStartE2EDuration="43.117706003s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.013019505 +0000 UTC m=+958.657656275" lastFinishedPulling="2026-02-02 14:49:53.831504052 +0000 UTC m=+995.476140822" observedRunningTime="2026-02-02 14:49:57.110064254 +0000 UTC m=+998.754701044" watchObservedRunningTime="2026-02-02 14:49:57.117706003 +0000 UTC m=+998.762342773" Feb 02 14:49:57 crc kubenswrapper[4869]: I0202 14:49:57.165198 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" podStartSLOduration=5.770881062 podStartE2EDuration="42.165168798s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.462367074 +0000 UTC m=+960.107003844" lastFinishedPulling="2026-02-02 14:49:54.85665481 +0000 UTC m=+996.501291580" observedRunningTime="2026-02-02 14:49:57.16447005 +0000 UTC m=+998.809106820" watchObservedRunningTime="2026-02-02 14:49:57.165168798 +0000 UTC m=+998.809805568" Feb 02 14:49:58 crc kubenswrapper[4869]: I0202 14:49:58.411324 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:58 crc kubenswrapper[4869]: I0202 14:49:58.411936 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:49:59 crc kubenswrapper[4869]: I0202 14:49:59.496164 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mk6t7" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" probeResult="failure" output=< Feb 02 14:49:59 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 14:49:59 crc kubenswrapper[4869]: > Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.629996 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" event={"ID":"7e9b35b2-f20d-4102-b541-63d2822c215d","Type":"ContainerStarted","Data":"9afa6d86470cadb79b93ffcf2d0abb331307f18e0c01e30da96f6d3be9b43e96"} Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.631063 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.633010 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" event={"ID":"c0779518-9e33-43e3-b373-263d74fbbd0f","Type":"ContainerStarted","Data":"0d8c1328ec52e73cdd86bacbcf24b06870f6941bbc722dcc462efc4260f2a7c5"} Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.633184 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 
14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.635369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" event={"ID":"bd94e783-b3ec-4d7e-b669-98255f029da6","Type":"ContainerStarted","Data":"2856fa3264e65b50d70e5ceb4a884aa822231c558fbda5aa40cf1b71f4891f80"} Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.635451 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.638544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" event={"ID":"f27a3d01-fbc5-46d9-9c11-ef6c21ead605","Type":"ContainerStarted","Data":"e2c1c344995f3d29f015c12574169aa6cfecda26a5618f318ba2bd092b4506ce"} Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.638776 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.660202 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" podStartSLOduration=4.477848141 podStartE2EDuration="47.660171138s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.461841801 +0000 UTC m=+960.106478571" lastFinishedPulling="2026-02-02 14:50:01.644164798 +0000 UTC m=+1003.288801568" observedRunningTime="2026-02-02 14:50:02.653389941 +0000 UTC m=+1004.298026711" watchObservedRunningTime="2026-02-02 14:50:02.660171138 +0000 UTC m=+1004.304807908" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.678733 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" podStartSLOduration=41.893551309 podStartE2EDuration="48.678708598s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:54.858611329 +0000 UTC m=+996.503248099" lastFinishedPulling="2026-02-02 14:50:01.643768618 +0000 UTC m=+1003.288405388" observedRunningTime="2026-02-02 14:50:02.678079012 +0000 UTC m=+1004.322715782" watchObservedRunningTime="2026-02-02 14:50:02.678708598 +0000 UTC m=+1004.323345368" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.718169 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" podStartSLOduration=40.933572909 podStartE2EDuration="47.718140144s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:54.858451544 +0000 UTC m=+996.503088304" lastFinishedPulling="2026-02-02 14:50:01.643018779 +0000 UTC m=+1003.287655539" observedRunningTime="2026-02-02 14:50:02.713465749 +0000 UTC m=+1004.358102529" watchObservedRunningTime="2026-02-02 14:50:02.718140144 +0000 UTC m=+1004.362776914" Feb 02 14:50:02 crc kubenswrapper[4869]: I0202 14:50:02.738418 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" podStartSLOduration=4.973885551 podStartE2EDuration="48.738392645s" podCreationTimestamp="2026-02-02 14:49:14 +0000 UTC" firstStartedPulling="2026-02-02 14:49:17.880015822 +0000 UTC m=+959.524652602" 
lastFinishedPulling="2026-02-02 14:50:01.644522926 +0000 UTC m=+1003.289159696" observedRunningTime="2026-02-02 14:50:02.731595357 +0000 UTC m=+1004.376232127" watchObservedRunningTime="2026-02-02 14:50:02.738392645 +0000 UTC m=+1004.383029415" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.195133 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-8f4c5cb64-pbxmj" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.214739 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5d77f4dbc9-qmt77" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.292209 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-cpjjt" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.298427 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-65dc6c8d9c-9ph7x" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.394549 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-fc589b45f-28mqn" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.399054 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-85899c864d-4cnfc" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.531030 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-87bd9d46f-762xj" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.568804 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7775d87d9d-l2b72" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.599341 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-hpnsb" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.915324 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-576995988b-swhqr" Feb 02 14:50:05 crc kubenswrapper[4869]: I0202 14:50:05.993735 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28zx5" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.028425 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6vnjh" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.079787 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7b89fdf75b-zdwh8" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.361896 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-565849b54-fm2kj" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.427813 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ntthk" Feb 02 14:50:06 crc kubenswrapper[4869]: I0202 14:50:06.453790 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-586b95b788-9fsf5" Feb 02 14:50:08 crc kubenswrapper[4869]: E0202 14:50:08.464383 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podUID="6719d674-1dac-4af1-859b-ea6a2186a20a" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.484144 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.542020 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.590812 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-58566f7c4b-mnxtb" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.733715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" event={"ID":"98a25bb6-75b1-49ad-8d7c-cc4e763470ec","Type":"ContainerStarted","Data":"138c732146319f66b14ff469591dab73126474a5491388391d962553666c79e2"} Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.734893 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.751030 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:50:08 crc kubenswrapper[4869]: I0202 14:50:08.780733 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" podStartSLOduration=4.322671253 podStartE2EDuration="53.780703305s" podCreationTimestamp="2026-02-02 14:49:15 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.130590942 +0000 UTC m=+959.775227712" lastFinishedPulling="2026-02-02 14:50:07.588622994 +0000 UTC m=+1009.233259764" observedRunningTime="2026-02-02 14:50:08.774611294 +0000 UTC m=+1010.419248085" watchObservedRunningTime="2026-02-02 14:50:08.780703305 +0000 UTC m=+1010.425340075" Feb 02 14:50:09 crc kubenswrapper[4869]: I0202 14:50:09.740553 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mk6t7" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" containerID="cri-o://4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" gracePeriod=2 Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.153721 4869 util.go:48] "No ready sandbox for pod can be found. 
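[Editor's note on the arithmetic in the pod_startup_latency_tracker records above: podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); consistent with this, pods that never pulled (firstStartedPulling="0001-01-01 ...") report identical SLO and E2E durations. A minimal Python sketch, using the community-operators-mk6t7 figures from the record above (nanosecond timestamps truncated to microseconds), reproduces both values:

    from datetime import datetime

    # Timestamps copied from the community-operators-mk6t7 record above.
    FMT = "%Y-%m-%d %H:%M:%S.%f %z"
    created    = datetime.strptime("2026-02-02 14:49:17.000000 +0000", FMT)  # podCreationTimestamp
    pull_start = datetime.strptime("2026-02-02 14:49:20.750013 +0000", FMT)  # firstStartedPulling
    pull_end   = datetime.strptime("2026-02-02 14:49:53.831650 +0000", FMT)  # lastFinishedPulling
    running    = datetime.strptime("2026-02-02 14:49:56.252799 +0000", FMT)  # watchObservedRunningTime

    e2e = (running - created).total_seconds()            # ~39.252799s = podStartE2EDuration
    slo = e2e - (pull_end - pull_start).total_seconds()  # ~6.171162s  = podStartSLOduration
    print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
]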
Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.153721 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk6t7"
Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.225642 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") pod \"c8bef13a-7759-4c87-be0b-09017f74f36e\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") "
Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.226042 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") pod \"c8bef13a-7759-4c87-be0b-09017f74f36e\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") "
Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.226078 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") pod \"c8bef13a-7759-4c87-be0b-09017f74f36e\" (UID: \"c8bef13a-7759-4c87-be0b-09017f74f36e\") "
Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.227159 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities" (OuterVolumeSpecName: "utilities") pod "c8bef13a-7759-4c87-be0b-09017f74f36e" (UID: "c8bef13a-7759-4c87-be0b-09017f74f36e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.232468 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5" (OuterVolumeSpecName: "kube-api-access-22zp5") pod "c8bef13a-7759-4c87-be0b-09017f74f36e" (UID: "c8bef13a-7759-4c87-be0b-09017f74f36e"). InnerVolumeSpecName "kube-api-access-22zp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.282501 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8bef13a-7759-4c87-be0b-09017f74f36e" (UID: "c8bef13a-7759-4c87-be0b-09017f74f36e"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.328390 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.328433 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8bef13a-7759-4c87-be0b-09017f74f36e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.328446 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22zp5\" (UniqueName: \"kubernetes.io/projected/c8bef13a-7759-4c87-be0b-09017f74f36e-kube-api-access-22zp5\") on node \"crc\" DevicePath \"\"" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.749848 4869 generic.go:334] "Generic (PLEG): container finished" podID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerID="4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" exitCode=0 Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.749930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerDied","Data":"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345"} Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.749968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk6t7" event={"ID":"c8bef13a-7759-4c87-be0b-09017f74f36e","Type":"ContainerDied","Data":"b98787b47532515aada795b4ad2399e98d871050306303546e73bd06745bd50a"} Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.749994 4869 scope.go:117] "RemoveContainer" containerID="4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.750111 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk6t7" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.787390 4869 scope.go:117] "RemoveContainer" containerID="22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.793832 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.801600 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mk6t7"] Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.811356 4869 scope.go:117] "RemoveContainer" containerID="5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.841365 4869 scope.go:117] "RemoveContainer" containerID="4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" Feb 02 14:50:10 crc kubenswrapper[4869]: E0202 14:50:10.848219 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345\": container with ID starting with 4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345 not found: ID does not exist" containerID="4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.848602 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345"} err="failed to get container status \"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345\": rpc error: code = NotFound desc = could not find container \"4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345\": container with ID starting with 4369ade3c5041faed768d7de75db41cee95af508c754ffd7cf7a2a056db4f345 not found: ID does not exist" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.848838 4869 scope.go:117] "RemoveContainer" containerID="22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b" Feb 02 14:50:10 crc kubenswrapper[4869]: E0202 14:50:10.849681 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b\": container with ID starting with 22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b not found: ID does not exist" containerID="22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.849762 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b"} err="failed to get container status \"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b\": rpc error: code = NotFound desc = could not find container \"22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b\": container with ID starting with 22c6e0b7905404723db7bf8586a6baa903ff88027ccf81e8d7db44166b84911b not found: ID does not exist" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.849810 4869 scope.go:117] "RemoveContainer" containerID="5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f" Feb 02 14:50:10 crc kubenswrapper[4869]: E0202 14:50:10.850535 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f\": container with ID starting with 5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f not found: ID does not exist" containerID="5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f" Feb 02 14:50:10 crc kubenswrapper[4869]: I0202 14:50:10.850639 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f"} err="failed to get container status \"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f\": rpc error: code = NotFound desc = could not find container \"5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f\": container with ID starting with 5f5993569a8bd4133d8bc44f3909aa1d5e8663649a8cab020c10cb2c94e8058f not found: ID does not exist" Feb 02 14:50:11 crc kubenswrapper[4869]: I0202 14:50:11.106507 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b4jxj" Feb 02 14:50:11 crc kubenswrapper[4869]: I0202 14:50:11.473864 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" path="/var/lib/kubelet/pods/c8bef13a-7759-4c87-be0b-09017f74f36e/volumes" Feb 02 14:50:11 crc kubenswrapper[4869]: I0202 14:50:11.834847 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl" Feb 02 14:50:15 crc kubenswrapper[4869]: I0202 14:50:15.543949 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-64469b487f-m9czv" Feb 02 14:50:15 crc kubenswrapper[4869]: I0202 14:50:15.849060 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5644b66645-2chmz" Feb 02 14:50:15 crc kubenswrapper[4869]: I0202 14:50:15.993437 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7b89ddb58-h2kl2" Feb 02 14:50:22 crc kubenswrapper[4869]: I0202 14:50:22.848503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" event={"ID":"6719d674-1dac-4af1-859b-ea6a2186a20a","Type":"ContainerStarted","Data":"f3b2e3dd4df40af0a6a4b4a46f04abd41944c447c6f5fedd7aad5ac45c56f1af"} Feb 02 14:50:22 crc kubenswrapper[4869]: I0202 14:50:22.868797 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-djzsw" podStartSLOduration=3.257716014 podStartE2EDuration="1m6.868772258s" podCreationTimestamp="2026-02-02 14:49:16 +0000 UTC" firstStartedPulling="2026-02-02 14:49:18.494977962 +0000 UTC m=+960.139614732" lastFinishedPulling="2026-02-02 14:50:22.106034206 +0000 UTC m=+1023.750670976" observedRunningTime="2026-02-02 14:50:22.868695576 +0000 UTC m=+1024.513332366" watchObservedRunningTime="2026-02-02 14:50:22.868772258 +0000 UTC m=+1024.513409028" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.199933 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202221 4869 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="extract-content" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202253 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="extract-content" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202278 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202374 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202420 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="extract-utilities" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202431 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="extract-utilities" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202442 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202450 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202462 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="extract-content" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202472 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="extract-content" Feb 02 14:50:40 crc kubenswrapper[4869]: E0202 14:50:40.202486 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="extract-utilities" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.202495 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="extract-utilities" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.210538 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8bef13a-7759-4c87-be0b-09017f74f36e" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.210656 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff654c3f-299a-4ca0-b9b0-ecd963f680c9" containerName="registry-server" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.212029 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.221642 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.223086 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tzlk5" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.223214 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.220551 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.227351 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.255952 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.262027 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.267613 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.282789 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342833 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342863 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.342962 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444667 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.444885 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.446725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.446832 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.447643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.475020 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") pod \"dnsmasq-dns-78dd6ddcc-k2kfn\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.475068 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") pod \"dnsmasq-dns-675f4bcbfc-q69j4\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.547534 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:50:40 crc kubenswrapper[4869]: I0202 14:50:40.594693 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:50:41 crc kubenswrapper[4869]: I0202 14:50:41.090378 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:50:41 crc kubenswrapper[4869]: I0202 14:50:41.157095 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:50:41 crc kubenswrapper[4869]: W0202 14:50:41.159899 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6166bb6a_5dce_4f45_8e72_80a8677451c1.slice/crio-47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64 WatchSource:0}: Error finding container 47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64: Status 404 returned error can't find the container with id 47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64 Feb 02 14:50:41 crc kubenswrapper[4869]: I0202 14:50:41.991741 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" event={"ID":"ffb6a700-f36f-4bad-a670-532f64d03e8d","Type":"ContainerStarted","Data":"40d283a23f15f072a351872ebd571e334c5a19ad9297f4d284e98ceadfa0347a"} Feb 02 14:50:41 crc kubenswrapper[4869]: I0202 14:50:41.992896 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" event={"ID":"6166bb6a-5dce-4f45-8e72-80a8677451c1","Type":"ContainerStarted","Data":"47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64"} Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.121224 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.154938 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.157377 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.191042 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.347373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scf4d\" (UniqueName: \"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.347580 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.347756 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.451065 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.451136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.451171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scf4d\" (UniqueName: \"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.452749 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.453003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.489082 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scf4d\" (UniqueName: 
\"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") pod \"dnsmasq-dns-666b6646f7-hlvlp\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.514044 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.613799 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.657775 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.660028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.685065 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.757423 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.757539 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.758271 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.861258 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.861317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.861418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.862720 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.863140 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:43 crc kubenswrapper[4869]: I0202 14:50:43.918693 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") pod \"dnsmasq-dns-57d769cc4f-xjhxx\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.010939 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.229500 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:50:44 crc kubenswrapper[4869]: W0202 14:50:44.237677 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84f2e276_a4a3_4992_aadc_e6e4e259feea.slice/crio-71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a WatchSource:0}: Error finding container 71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a: Status 404 returned error can't find the container with id 71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.454693 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.456620 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462221 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462367 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gjvp4" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462386 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462655 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462769 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.462928 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.463022 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.463107 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581159 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581275 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581314 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjdr\" (UniqueName: 
\"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581620 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581640 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.581691 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.582539 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.680847 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684810 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684925 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684948 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.684991 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.685014 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.685043 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjdr\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.685059 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.688064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.688876 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.689214 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.691218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.691509 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.693089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.693281 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.695607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.699946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.700759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.708978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjdr\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.723371 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 
crc kubenswrapper[4869]: I0202 14:50:44.856704 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.857303 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.861858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.871520 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.871669 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.875444 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.875881 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gtj7h" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.876418 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.876546 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.876678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.884133 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991453 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991579 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991882 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.991993 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:44 crc kubenswrapper[4869]: I0202 14:50:44.992280 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.054195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" event={"ID":"84f2e276-a4a3-4992-aadc-e6e4e259feea","Type":"ContainerStarted","Data":"71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a"} Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.056053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" event={"ID":"8b641090-1ff7-4058-9633-de20ec70c671","Type":"ContainerStarted","Data":"29623a0a20d0d3f426297d37f9c2d0abf87beb1dfbc32ce1bbed40778e70b8b2"} Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.093996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094171 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094259 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094374 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094398 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.094451 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.096457 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.096612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.097610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.097946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.098400 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.098685 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.099403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.100694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.103627 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.104310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.117682 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.152747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.211982 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.546061 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:50:45 crc kubenswrapper[4869]: W0202 14:50:45.559285 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb339c96d_7eb1_4359_bcc3_6853622d5aa6.slice/crio-71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b WatchSource:0}: Error finding container 71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b: Status 404 returned error can't find the container with id 71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.688090 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.712702 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.712964 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.719467 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4zkj9" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.720131 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.727559 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.750762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.761365 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.825090 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826586 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826669 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcft5\" (UniqueName: \"kubernetes.io/projected/0db20771-eb71-4272-9814-ab5bf0fff1fe-kube-api-access-fcft5\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-galera-tls-certs\") pod \"openstack-galera-0\" 
(UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.826892 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.891705 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcft5\" (UniqueName: \"kubernetes.io/projected/0db20771-eb71-4272-9814-ab5bf0fff1fe-kube-api-access-fcft5\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928372 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928397 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.928873 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.929525 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.931474 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.935642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.938660 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0db20771-eb71-4272-9814-ab5bf0fff1fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.946495 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: I0202 14:50:45.948613 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db20771-eb71-4272-9814-ab5bf0fff1fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:45 crc kubenswrapper[4869]: W0202 14:50:45.965381 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95035071_a194_40ba_9b64_700ae3121dc4.slice/crio-4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965 WatchSource:0}: Error finding container 4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965: Status 404 returned error can't find the container with id 4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965 Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.002808 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:46 crc 
kubenswrapper[4869]: I0202 14:50:46.015115 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcft5\" (UniqueName: \"kubernetes.io/projected/0db20771-eb71-4272-9814-ab5bf0fff1fe-kube-api-access-fcft5\") pod \"openstack-galera-0\" (UID: \"0db20771-eb71-4272-9814-ab5bf0fff1fe\") " pod="openstack/openstack-galera-0" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.077290 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.104506 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerStarted","Data":"71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b"} Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.120037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerStarted","Data":"4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965"} Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.796590 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.931317 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.937348 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.948798 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-llsf5" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.949122 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.949349 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.949563 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 14:50:46 crc kubenswrapper[4869]: I0202 14:50:46.960517 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.062186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.062577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.062711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.062858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.063014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.063169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44h8g\" (UniqueName: \"kubernetes.io/projected/4287f1a9-b523-48a9-a999-fc8f34b212a4-kube-api-access-44h8g\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.063303 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.063564 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.065410 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.066685 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.077774 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.080637 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fz6fg" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.081047 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.089331 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165611 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165696 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/1078d20a-9d7e-45ef-8be5-bade239489c4-kube-api-access-7786r\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165784 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165805 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165827 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.165988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166095 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-config-data\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44h8g\" (UniqueName: \"kubernetes.io/projected/4287f1a9-b523-48a9-a999-fc8f34b212a4-kube-api-access-44h8g\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166200 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.166260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-kolla-config\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.167947 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.168153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.168401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.168732 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"4287f1a9-b523-48a9-a999-fc8f34b212a4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.171624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287f1a9-b523-48a9-a999-fc8f34b212a4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.207803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44h8g\" (UniqueName: \"kubernetes.io/projected/4287f1a9-b523-48a9-a999-fc8f34b212a4-kube-api-access-44h8g\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.207830 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.222089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4287f1a9-b523-48a9-a999-fc8f34b212a4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.226134 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"4287f1a9-b523-48a9-a999-fc8f34b212a4\") " pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271523 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/1078d20a-9d7e-45ef-8be5-bade239489c4-kube-api-access-7786r\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-config-data\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271758 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-kolla-config\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271791 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.271842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.273803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-config-data\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.274121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1078d20a-9d7e-45ef-8be5-bade239489c4-kolla-config\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.277736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.291899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1078d20a-9d7e-45ef-8be5-bade239489c4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.296542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7786r\" (UniqueName: \"kubernetes.io/projected/1078d20a-9d7e-45ef-8be5-bade239489c4-kube-api-access-7786r\") pod \"memcached-0\" (UID: \"1078d20a-9d7e-45ef-8be5-bade239489c4\") " pod="openstack/memcached-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.298697 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 14:50:47 crc kubenswrapper[4869]: I0202 14:50:47.410650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.800305 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.858200 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.870642 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-77gm6" Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.905112 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:50:48 crc kubenswrapper[4869]: I0202 14:50:48.945395 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") pod \"kube-state-metrics-0\" (UID: \"52d7887e-0487-4179-a0af-6f51b9eed8e7\") " pod="openstack/kube-state-metrics-0" Feb 02 14:50:49 crc kubenswrapper[4869]: I0202 14:50:49.047052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") pod \"kube-state-metrics-0\" (UID: \"52d7887e-0487-4179-a0af-6f51b9eed8e7\") " pod="openstack/kube-state-metrics-0" Feb 02 14:50:49 crc kubenswrapper[4869]: I0202 14:50:49.079996 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") pod \"kube-state-metrics-0\" (UID: \"52d7887e-0487-4179-a0af-6f51b9eed8e7\") " pod="openstack/kube-state-metrics-0" Feb 02 14:50:49 crc kubenswrapper[4869]: I0202 14:50:49.236036 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.519478 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f7z74"] Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.521200 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.531716 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5nxjc" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.531887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.531743 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.532932 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f7z74"] Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.592785 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bd7dt"] Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.599974 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.611150 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bd7dt"] Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629022 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-combined-ca-bundle\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629415 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629532 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-etc-ovs\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d51425d7-d30c-466d-b478-17a637e3ef9f-scripts\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-log\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.629998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79eb9544-e5e9-455c-94ca-bb36fa6eb873-scripts\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630108 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95b7\" (UniqueName: \"kubernetes.io/projected/79eb9544-e5e9-455c-94ca-bb36fa6eb873-kube-api-access-c95b7\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630233 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-run\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsqbr\" (UniqueName: 
\"kubernetes.io/projected/d51425d7-d30c-466d-b478-17a637e3ef9f-kube-api-access-nsqbr\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-ovn-controller-tls-certs\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630608 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-lib\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.630877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-log-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732628 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d51425d7-d30c-466d-b478-17a637e3ef9f-scripts\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-log\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79eb9544-e5e9-455c-94ca-bb36fa6eb873-scripts\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732803 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c95b7\" (UniqueName: \"kubernetes.io/projected/79eb9544-e5e9-455c-94ca-bb36fa6eb873-kube-api-access-c95b7\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732846 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-run\") pod \"ovn-controller-ovs-bd7dt\" (UID: 
\"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsqbr\" (UniqueName: \"kubernetes.io/projected/d51425d7-d30c-466d-b478-17a637e3ef9f-kube-api-access-nsqbr\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-ovn-controller-tls-certs\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732968 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.732994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-lib\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733042 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-log-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-combined-ca-bundle\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733147 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-etc-ovs\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.733856 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-etc-ovs\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735322 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-log-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735424 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-run\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735342 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735471 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-lib\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d51425d7-d30c-466d-b478-17a637e3ef9f-var-run-ovn\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.735504 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/79eb9544-e5e9-455c-94ca-bb36fa6eb873-var-log\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.737761 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79eb9544-e5e9-455c-94ca-bb36fa6eb873-scripts\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.738018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d51425d7-d30c-466d-b478-17a637e3ef9f-scripts\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.743359 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-combined-ca-bundle\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.745587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51425d7-d30c-466d-b478-17a637e3ef9f-ovn-controller-tls-certs\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.756618 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95b7\" (UniqueName: \"kubernetes.io/projected/79eb9544-e5e9-455c-94ca-bb36fa6eb873-kube-api-access-c95b7\") pod \"ovn-controller-ovs-bd7dt\" (UID: \"79eb9544-e5e9-455c-94ca-bb36fa6eb873\") " pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.757515 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsqbr\" (UniqueName: \"kubernetes.io/projected/d51425d7-d30c-466d-b478-17a637e3ef9f-kube-api-access-nsqbr\") pod \"ovn-controller-f7z74\" (UID: \"d51425d7-d30c-466d-b478-17a637e3ef9f\") " pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.846604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f7z74" Feb 02 14:50:52 crc kubenswrapper[4869]: I0202 14:50:52.926713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.239049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0db20771-eb71-4272-9814-ab5bf0fff1fe","Type":"ContainerStarted","Data":"3bd6013ab427605f751d6d5e88cdfa9e6c7d0a76361b78cacc0f93508f5f1596"} Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.363113 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.364558 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.370805 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.371317 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.371621 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.371780 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-kj4w2" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.373485 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.392571 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.446392 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.446588 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.446872 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.447020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-config\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.447117 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z9r8\" (UniqueName: \"kubernetes.io/projected/208fe19b-f03b-4a68-b6f2-f9dc3783239e-kube-api-access-8z9r8\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.447160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.449366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.449519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551065 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551152 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551349 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-config\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551382 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z9r8\" (UniqueName: \"kubernetes.io/projected/208fe19b-f03b-4a68-b6f2-f9dc3783239e-kube-api-access-8z9r8\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.551901 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.552003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.553140 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-config\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.553367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/208fe19b-f03b-4a68-b6f2-f9dc3783239e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.557779 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.557836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.570243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/208fe19b-f03b-4a68-b6f2-f9dc3783239e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.591839 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.603062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z9r8\" (UniqueName: \"kubernetes.io/projected/208fe19b-f03b-4a68-b6f2-f9dc3783239e-kube-api-access-8z9r8\") pod \"ovsdbserver-nb-0\" (UID: \"208fe19b-f03b-4a68-b6f2-f9dc3783239e\") " pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:53 crc kubenswrapper[4869]: I0202 14:50:53.711028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.122848 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.125744 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.131891 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hz4lj" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.131939 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.132053 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.131975 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.141631 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.202968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203043 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203071 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v74v\" (UniqueName: \"kubernetes.io/projected/c9a1c388-0473-4284-9a2c-09e3d97858f2-kube-api-access-9v74v\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203127 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203179 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.203255 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305439 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305510 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305577 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v74v\" (UniqueName: \"kubernetes.io/projected/c9a1c388-0473-4284-9a2c-09e3d97858f2-kube-api-access-9v74v\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305634 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305678 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305745 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.305793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.306302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.306677 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.307113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.307446 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a1c388-0473-4284-9a2c-09e3d97858f2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.312810 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.315856 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.325946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v74v\" (UniqueName: \"kubernetes.io/projected/c9a1c388-0473-4284-9a2c-09e3d97858f2-kube-api-access-9v74v\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.326752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a1c388-0473-4284-9a2c-09e3d97858f2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.341585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9a1c388-0473-4284-9a2c-09e3d97858f2\") " pod="openstack/ovsdbserver-sb-0" Feb 02 14:50:56 crc kubenswrapper[4869]: I0202 14:50:56.460713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 14:51:03 crc kubenswrapper[4869]: E0202 14:51:03.705840 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 02 14:51:03 crc kubenswrapper[4869]: E0202 14:51:03.707575 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
Feb 02 14:51:03 crc kubenswrapper[4869]: E0202 14:51:03.707575 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfjdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(b339c96d-7eb1-4359-bcc3-6853622d5aa6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 14:51:03 crc kubenswrapper[4869]: E0202 14:51:03.709167 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6"
Feb 02 14:51:04 crc kubenswrapper[4869]: E0202 14:51:04.341025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6"
Feb 02 14:51:11 crc kubenswrapper[4869]: I0202 14:51:11.534774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.075546 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.075798 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 
5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fd4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-xjhxx_openstack(8b641090-1ff7-4058-9633-de20ec70c671): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.077014 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" podUID="8b641090-1ff7-4058-9633-de20ec70c671" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.124339 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.124516 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h22v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-q69j4_openstack(ffb6a700-f36f-4bad-a670-532f64d03e8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.125662 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" podUID="ffb6a700-f36f-4bad-a670-532f64d03e8d" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.156351 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.156594 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbmfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-k2kfn_openstack(6166bb6a-5dce-4f45-8e72-80a8677451c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.160335 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" podUID="6166bb6a-5dce-4f45-8e72-80a8677451c1" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.206017 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.206882 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scf4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-hlvlp_openstack(84f2e276-a4a3-4992-aadc-e6e4e259feea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.208409 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" podUID="84f2e276-a4a3-4992-aadc-e6e4e259feea"
Feb 02 14:51:12 crc kubenswrapper[4869]: I0202 14:51:12.433738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1078d20a-9d7e-45ef-8be5-bade239489c4","Type":"ContainerStarted","Data":"0742d987bd520eb5b5410dfa68de7b74a894c31587c7a99077474008abe77c17"}
Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.437040 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" podUID="84f2e276-a4a3-4992-aadc-e6e4e259feea"
Feb 02 14:51:12 crc kubenswrapper[4869]: E0202 14:51:12.437297 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" podUID="8b641090-1ff7-4058-9633-de20ec70c671"
Feb 02 14:51:12 crc kubenswrapper[4869]: I0202 14:51:12.638793 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f7z74"]
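The "Failed to process watch event ... Status 404" warnings a few entries below reference cgroup paths of the form kubepods-besteffort-pod<UID>.slice/crio-<container ID>, where the pod UID has its dashes replaced by underscores: pod UID 52d7887e-0487-4179-a0af-6f51b9eed8e7 appears as kubepods-besteffort-pod52d7887e_0487_4179_a0af_6f51b9eed8e7.slice. A minimal Go sketch rebuilding such a path from a pod UID and container ID; the naming convention is inferred from those warnings and covers only besteffort pods under a systemd cgroup driver with CRI-O:

package main

import (
	"fmt"
	"strings"
)

// besteffortCgroup maps a pod UID and CRI-O container ID to the cgroup path
// shape seen in the watch-event warnings below (an inferred convention, not
// an API guarantee).
func besteffortCgroup(podUID, containerID string) string {
	slice := "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	return fmt.Sprintf("/kubepods.slice/kubepods-besteffort.slice/%s/crio-%s", slice, containerID)
}

func main() {
	fmt.Println(besteffortCgroup(
		"52d7887e-0487-4179-a0af-6f51b9eed8e7",
		"be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103"))
}

The 404s themselves are a benign race: the cgroup watch fires for a container that the runtime cannot yet (or no longer) resolve by ID, and the corresponding pods still report ContainerStarted moments later.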
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.115718 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 14:51:13 crc kubenswrapper[4869]: W0202 14:51:13.123109 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d7887e_0487_4179_a0af_6f51b9eed8e7.slice/crio-be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103 WatchSource:0}: Error finding container be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103: Status 404 returned error can't find the container with id be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103 Feb 02 14:51:13 crc kubenswrapper[4869]: W0202 14:51:13.130573 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4287f1a9_b523_48a9_a999_fc8f34b212a4.slice/crio-1235f102623269e036d7b19ec04050e25397b702ec633308ba14497ff8a8a44f WatchSource:0}: Error finding container 1235f102623269e036d7b19ec04050e25397b702ec633308ba14497ff8a8a44f: Status 404 returned error can't find the container with id 1235f102623269e036d7b19ec04050e25397b702ec633308ba14497ff8a8a44f Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.221574 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.246321 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 14:51:13 crc kubenswrapper[4869]: W0202 14:51:13.278568 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a1c388_0473_4284_9a2c_09e3d97858f2.slice/crio-a695105036b50c8f1c1e36fca961ffa7455e615d2dcaf2df126de3cbe6b0272e WatchSource:0}: Error finding container a695105036b50c8f1c1e36fca961ffa7455e615d2dcaf2df126de3cbe6b0272e: Status 404 returned error can't find the container with id a695105036b50c8f1c1e36fca961ffa7455e615d2dcaf2df126de3cbe6b0272e Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.376743 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.409291 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") pod \"ffb6a700-f36f-4bad-a670-532f64d03e8d\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.409382 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") pod \"ffb6a700-f36f-4bad-a670-532f64d03e8d\" (UID: \"ffb6a700-f36f-4bad-a670-532f64d03e8d\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.409927 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config" (OuterVolumeSpecName: "config") pod "ffb6a700-f36f-4bad-a670-532f64d03e8d" (UID: "ffb6a700-f36f-4bad-a670-532f64d03e8d"). InnerVolumeSpecName "config". 
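The init-container spec dumped in the ErrImagePull records above runs dnsmasq with --test, which only parses the configuration and exits, so the pod's actual DNS container is gated on a config syntax check; the pull itself was aborted ("context canceled"), consistent with the SyncLoop DELETE/REMOVE records for these dnsmasq pods further down. Reassembled from the Command and Args fields of that spec (a sketch, not an authoritative copy; POD_IP is populated from the downward API per the EnvVar{FieldPath:status.podIP} entry, written here as the shell variable $POD_IP):

    #!/bin/bash
    # Reconstructed from the logged container spec above; --test makes dnsmasq
    # syntax-check the config under /etc/dnsmasq.d and exit without serving.
    dnsmasq --interface='*' \
            --conf-dir=/etc/dnsmasq.d \
            --hostsdir=/etc/dnsmasq.d/hosts \
            --keep-in-foreground --log-debug --bind-interfaces \
            --listen-address="$POD_IP" --port 5353 \
            --log-facility=- --no-hosts --domain-needed \
            --no-resolv --bogus-priv --log-queries \
            --test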
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.452161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74" event={"ID":"d51425d7-d30c-466d-b478-17a637e3ef9f","Type":"ContainerStarted","Data":"b8aa905f4aa320d22c75d46051742b044332d353c2bb5cac09622ca7bb44d496"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.455616 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9a1c388-0473-4284-9a2c-09e3d97858f2","Type":"ContainerStarted","Data":"a695105036b50c8f1c1e36fca961ffa7455e615d2dcaf2df126de3cbe6b0272e"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.458379 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0db20771-eb71-4272-9814-ab5bf0fff1fe","Type":"ContainerStarted","Data":"1f043f93bdd75692e3778bb3515619f7b78ac6456cb11303903caa9aa52d1f13"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.460365 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"52d7887e-0487-4179-a0af-6f51b9eed8e7","Type":"ContainerStarted","Data":"be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.461596 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.463867 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.475785 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v" (OuterVolumeSpecName: "kube-api-access-7h22v") pod "ffb6a700-f36f-4bad-a670-532f64d03e8d" (UID: "ffb6a700-f36f-4bad-a670-532f64d03e8d"). InnerVolumeSpecName "kube-api-access-7h22v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.513487 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") pod \"6166bb6a-5dce-4f45-8e72-80a8677451c1\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.513549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") pod \"6166bb6a-5dce-4f45-8e72-80a8677451c1\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.513673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") pod \"6166bb6a-5dce-4f45-8e72-80a8677451c1\" (UID: \"6166bb6a-5dce-4f45-8e72-80a8677451c1\") " Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.514313 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6166bb6a-5dce-4f45-8e72-80a8677451c1" (UID: "6166bb6a-5dce-4f45-8e72-80a8677451c1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.514539 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb6a700-f36f-4bad-a670-532f64d03e8d-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.514564 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h22v\" (UniqueName: \"kubernetes.io/projected/ffb6a700-f36f-4bad-a670-532f64d03e8d-kube-api-access-7h22v\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.514961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config" (OuterVolumeSpecName: "config") pod "6166bb6a-5dce-4f45-8e72-80a8677451c1" (UID: "6166bb6a-5dce-4f45-8e72-80a8677451c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.550112 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q69j4" event={"ID":"ffb6a700-f36f-4bad-a670-532f64d03e8d","Type":"ContainerDied","Data":"40d283a23f15f072a351872ebd571e334c5a19ad9297f4d284e98ceadfa0347a"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.550182 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-k2kfn" event={"ID":"6166bb6a-5dce-4f45-8e72-80a8677451c1","Type":"ContainerDied","Data":"47354be68badf1fa7e0079595b392c49b2b5801c8ff1e25f49e089cb7cd87f64"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.550201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4287f1a9-b523-48a9-a999-fc8f34b212a4","Type":"ContainerStarted","Data":"1235f102623269e036d7b19ec04050e25397b702ec633308ba14497ff8a8a44f"} Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.576628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx" (OuterVolumeSpecName: "kube-api-access-cbmfx") pod "6166bb6a-5dce-4f45-8e72-80a8677451c1" (UID: "6166bb6a-5dce-4f45-8e72-80a8677451c1"). InnerVolumeSpecName "kube-api-access-cbmfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.618829 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.618878 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbmfx\" (UniqueName: \"kubernetes.io/projected/6166bb6a-5dce-4f45-8e72-80a8677451c1-kube-api-access-cbmfx\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.618950 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6166bb6a-5dce-4f45-8e72-80a8677451c1-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.863921 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.872188 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q69j4"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.887336 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:51:13 crc kubenswrapper[4869]: I0202 14:51:13.900362 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-k2kfn"] Feb 02 14:51:13 crc kubenswrapper[4869]: E0202 14:51:13.972305 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6166bb6a_5dce_4f45_8e72_80a8677451c1.slice\": RecentStats: unable to find data in memory cache]" Feb 02 14:51:14 crc kubenswrapper[4869]: I0202 14:51:14.129613 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 14:51:14 crc kubenswrapper[4869]: I0202 14:51:14.267875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bd7dt"] Feb 02 14:51:14 crc kubenswrapper[4869]: I0202 14:51:14.496868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerStarted","Data":"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"} Feb 02 14:51:14 crc kubenswrapper[4869]: I0202 14:51:14.507016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4287f1a9-b523-48a9-a999-fc8f34b212a4","Type":"ContainerStarted","Data":"afb1cbeab983d6b4b46ae44495de0b332c18b10393223bd85665c1538577edab"} Feb 02 14:51:15 crc kubenswrapper[4869]: W0202 14:51:15.134967 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod208fe19b_f03b_4a68_b6f2_f9dc3783239e.slice/crio-85c3f98a07f875e0440411e6a9fe0b4c999af39c47897fc60fc2fe822ac894ab WatchSource:0}: Error finding container 85c3f98a07f875e0440411e6a9fe0b4c999af39c47897fc60fc2fe822ac894ab: Status 404 returned error can't find the container with id 85c3f98a07f875e0440411e6a9fe0b4c999af39c47897fc60fc2fe822ac894ab Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.473332 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6166bb6a-5dce-4f45-8e72-80a8677451c1" 
path="/var/lib/kubelet/pods/6166bb6a-5dce-4f45-8e72-80a8677451c1/volumes" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.474123 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb6a700-f36f-4bad-a670-532f64d03e8d" path="/var/lib/kubelet/pods/ffb6a700-f36f-4bad-a670-532f64d03e8d/volumes" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.513982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"208fe19b-f03b-4a68-b6f2-f9dc3783239e","Type":"ContainerStarted","Data":"85c3f98a07f875e0440411e6a9fe0b4c999af39c47897fc60fc2fe822ac894ab"} Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.515784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bd7dt" event={"ID":"79eb9544-e5e9-455c-94ca-bb36fa6eb873","Type":"ContainerStarted","Data":"1c23fbdda4e59536fefeaef67eb5d8febb2087bd572cafb12a5a3ea2fe0c0860"} Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.815723 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-sr5dv"] Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.817371 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.821268 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.854208 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sr5dv"] Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovs-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-combined-ca-bundle\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904650 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovn-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4lgs\" (UniqueName: \"kubernetes.io/projected/2b612893-5e70-472a-a65f-0d0c66f82de3-kube-api-access-n4lgs\") pod 
\"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.904724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b612893-5e70-472a-a65f-0d0c66f82de3-config\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:15 crc kubenswrapper[4869]: I0202 14:51:15.998171 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovs-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006889 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-combined-ca-bundle\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovn-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.006994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4lgs\" (UniqueName: \"kubernetes.io/projected/2b612893-5e70-472a-a65f-0d0c66f82de3-kube-api-access-n4lgs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.007032 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b612893-5e70-472a-a65f-0d0c66f82de3-config\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.008019 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b612893-5e70-472a-a65f-0d0c66f82de3-config\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.010440 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovn-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.010533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2b612893-5e70-472a-a65f-0d0c66f82de3-ovs-rundir\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.018328 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-combined-ca-bundle\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.035192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b612893-5e70-472a-a65f-0d0c66f82de3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.040378 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.042376 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.052236 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4lgs\" (UniqueName: \"kubernetes.io/projected/2b612893-5e70-472a-a65f-0d0c66f82de3-kube-api-access-n4lgs\") pod \"ovn-controller-metrics-sr5dv\" (UID: \"2b612893-5e70-472a-a65f-0d0c66f82de3\") " pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.052274 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.120045 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.120347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.120520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffdv\" (UniqueName: \"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.120705 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.139697 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.147332 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-sr5dv" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.229694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.230239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.231006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.231131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.231189 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pffdv\" (UniqueName: \"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.231634 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.236192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.282073 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pffdv\" (UniqueName: 
\"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") pod \"dnsmasq-dns-5bf47b49b7-frtgm\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") " pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.323338 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.357235 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.359582 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.362539 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.388146 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.422800 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442381 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442424 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442551 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.442620 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.526546 4869 generic.go:334] "Generic (PLEG): container finished" podID="0db20771-eb71-4272-9814-ab5bf0fff1fe" containerID="1f043f93bdd75692e3778bb3515619f7b78ac6456cb11303903caa9aa52d1f13" exitCode=0 Feb 02 14:51:16 crc 
kubenswrapper[4869]: I0202 14:51:16.526594 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0db20771-eb71-4272-9814-ab5bf0fff1fe","Type":"ContainerDied","Data":"1f043f93bdd75692e3778bb3515619f7b78ac6456cb11303903caa9aa52d1f13"} Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545475 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545584 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.545708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.547740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.547754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.548496 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.550424 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.580082 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") pod \"dnsmasq-dns-8554648995-4c4vl\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:16 crc kubenswrapper[4869]: I0202 14:51:16.705967 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.402782 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.472776 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") pod \"8b641090-1ff7-4058-9633-de20ec70c671\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.472861 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") pod \"8b641090-1ff7-4058-9633-de20ec70c671\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.473150 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") pod \"8b641090-1ff7-4058-9633-de20ec70c671\" (UID: \"8b641090-1ff7-4058-9633-de20ec70c671\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.473520 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b641090-1ff7-4058-9633-de20ec70c671" (UID: "8b641090-1ff7-4058-9633-de20ec70c671"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.475284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config" (OuterVolumeSpecName: "config") pod "8b641090-1ff7-4058-9633-de20ec70c671" (UID: "8b641090-1ff7-4058-9633-de20ec70c671"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.476397 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.476423 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b641090-1ff7-4058-9633-de20ec70c671-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.476696 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c" (OuterVolumeSpecName: "kube-api-access-2fd4c") pod "8b641090-1ff7-4058-9633-de20ec70c671" (UID: "8b641090-1ff7-4058-9633-de20ec70c671"). InnerVolumeSpecName "kube-api-access-2fd4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.542621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" event={"ID":"8b641090-1ff7-4058-9633-de20ec70c671","Type":"ContainerDied","Data":"29623a0a20d0d3f426297d37f9c2d0abf87beb1dfbc32ce1bbed40778e70b8b2"} Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.542729 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xjhxx" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.585109 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fd4c\" (UniqueName: \"kubernetes.io/projected/8b641090-1ff7-4058-9633-de20ec70c671-kube-api-access-2fd4c\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.594966 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.605926 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xjhxx"] Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.837849 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993066 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") pod \"84f2e276-a4a3-4992-aadc-e6e4e259feea\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993252 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") pod \"84f2e276-a4a3-4992-aadc-e6e4e259feea\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993336 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scf4d\" (UniqueName: \"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") pod \"84f2e276-a4a3-4992-aadc-e6e4e259feea\" (UID: \"84f2e276-a4a3-4992-aadc-e6e4e259feea\") " Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993782 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config" (OuterVolumeSpecName: "config") pod "84f2e276-a4a3-4992-aadc-e6e4e259feea" (UID: "84f2e276-a4a3-4992-aadc-e6e4e259feea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.993895 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84f2e276-a4a3-4992-aadc-e6e4e259feea" (UID: "84f2e276-a4a3-4992-aadc-e6e4e259feea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:17 crc kubenswrapper[4869]: I0202 14:51:17.997767 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d" (OuterVolumeSpecName: "kube-api-access-scf4d") pod "84f2e276-a4a3-4992-aadc-e6e4e259feea" (UID: "84f2e276-a4a3-4992-aadc-e6e4e259feea"). InnerVolumeSpecName "kube-api-access-scf4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.095516 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.095566 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scf4d\" (UniqueName: \"kubernetes.io/projected/84f2e276-a4a3-4992-aadc-e6e4e259feea-kube-api-access-scf4d\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.095579 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f2e276-a4a3-4992-aadc-e6e4e259feea-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.550717 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.550711 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-hlvlp" event={"ID":"84f2e276-a4a3-4992-aadc-e6e4e259feea","Type":"ContainerDied","Data":"71163c26b3fc77f1df94a031810f7153e80509d8158c39baec69cfd192d2281a"} Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.553478 4869 generic.go:334] "Generic (PLEG): container finished" podID="4287f1a9-b523-48a9-a999-fc8f34b212a4" containerID="afb1cbeab983d6b4b46ae44495de0b332c18b10393223bd85665c1538577edab" exitCode=0 Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.553514 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4287f1a9-b523-48a9-a999-fc8f34b212a4","Type":"ContainerDied","Data":"afb1cbeab983d6b4b46ae44495de0b332c18b10393223bd85665c1538577edab"} Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.626434 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:51:18 crc kubenswrapper[4869]: I0202 14:51:18.627270 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-hlvlp"] Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.215272 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sr5dv"] Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.279635 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:51:19 crc kubenswrapper[4869]: W0202 14:51:19.323048 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54b21918_ca4b_429c_8a6e_dd4bb0240efd.slice/crio-ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8 WatchSource:0}: Error finding container ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8: Status 404 returned error can't find the container with id ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8 Feb 02 14:51:19 crc kubenswrapper[4869]: W0202 14:51:19.326497 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b612893_5e70_472a_a65f_0d0c66f82de3.slice/crio-409947a8fbc3343c46a6c3250844294a0320637f3bc7d4482299456181ae9b79 WatchSource:0}: Error finding container 409947a8fbc3343c46a6c3250844294a0320637f3bc7d4482299456181ae9b79: Status 404 returned error can't find the container with id 409947a8fbc3343c46a6c3250844294a0320637f3bc7d4482299456181ae9b79 Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.405431 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"] Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.479072 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f2e276-a4a3-4992-aadc-e6e4e259feea" path="/var/lib/kubelet/pods/84f2e276-a4a3-4992-aadc-e6e4e259feea/volumes" Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.483585 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b641090-1ff7-4058-9633-de20ec70c671" path="/var/lib/kubelet/pods/8b641090-1ff7-4058-9633-de20ec70c671/volumes" Feb 02 14:51:19 crc kubenswrapper[4869]: W0202 14:51:19.558059 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cf07564_1cdf_4897_be34_68c8d9ec7534.slice/crio-1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff WatchSource:0}: Error finding container 1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff: Status 404 returned error can't find the container with id 1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.568476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerStarted","Data":"ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8"} Feb 02 14:51:19 crc kubenswrapper[4869]: I0202 14:51:19.570326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sr5dv" event={"ID":"2b612893-5e70-472a-a65f-0d0c66f82de3","Type":"ContainerStarted","Data":"409947a8fbc3343c46a6c3250844294a0320637f3bc7d4482299456181ae9b79"} Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.580131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerStarted","Data":"1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff"} Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.588129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1078d20a-9d7e-45ef-8be5-bade239489c4","Type":"ContainerStarted","Data":"8624dc0f6e5aef1937a45574b4039005c89f64cb76b90fd3084680864b7a8ca5"} Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.588788 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.595706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4287f1a9-b523-48a9-a999-fc8f34b212a4","Type":"ContainerStarted","Data":"c4c71a1806a7cf6c12be9dc691b40d12aac113502b11ac27efe26b925b9ca279"} Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.606312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74" event={"ID":"d51425d7-d30c-466d-b478-17a637e3ef9f","Type":"ContainerStarted","Data":"31b2aa396592de0711b171e3fde6e94effe4a619e90cae985d7379ddab85267b"} Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.626403 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-f7z74" Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.634184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"208fe19b-f03b-4a68-b6f2-f9dc3783239e","Type":"ContainerStarted","Data":"7172f4ff4f290db088a1c5719f2d94b3e2c65c93bba4fc500c4ca093e634bac4"} Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.646160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0db20771-eb71-4272-9814-ab5bf0fff1fe","Type":"ContainerStarted","Data":"e252241fcc57d3472614846ec2db93657f20d57c65957a0c1b70f834aff8f9aa"} Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.654149 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.508390013 podStartE2EDuration="33.654119842s" podCreationTimestamp="2026-02-02 14:50:47 +0000 UTC" firstStartedPulling="2026-02-02 14:51:12.140228097 +0000 
UTC m=+1073.784864877" lastFinishedPulling="2026-02-02 14:51:18.285957936 +0000 UTC m=+1079.930594706" observedRunningTime="2026-02-02 14:51:20.61852072 +0000 UTC m=+1082.263157510" watchObservedRunningTime="2026-02-02 14:51:20.654119842 +0000 UTC m=+1082.298756612" Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.677296 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.677266474 podStartE2EDuration="35.677266474s" podCreationTimestamp="2026-02-02 14:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:51:20.664792356 +0000 UTC m=+1082.309429136" watchObservedRunningTime="2026-02-02 14:51:20.677266474 +0000 UTC m=+1082.321903244" Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.694339 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-f7z74" podStartSLOduration=22.61538358 podStartE2EDuration="28.694306907s" podCreationTimestamp="2026-02-02 14:50:52 +0000 UTC" firstStartedPulling="2026-02-02 14:51:12.655241316 +0000 UTC m=+1074.299878086" lastFinishedPulling="2026-02-02 14:51:18.734164643 +0000 UTC m=+1080.378801413" observedRunningTime="2026-02-02 14:51:20.689646521 +0000 UTC m=+1082.334283301" watchObservedRunningTime="2026-02-02 14:51:20.694306907 +0000 UTC m=+1082.338943697" Feb 02 14:51:20 crc kubenswrapper[4869]: I0202 14:51:20.718551 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.950922193 podStartE2EDuration="36.718528896s" podCreationTimestamp="2026-02-02 14:50:44 +0000 UTC" firstStartedPulling="2026-02-02 14:50:52.443269473 +0000 UTC m=+1054.087906243" lastFinishedPulling="2026-02-02 14:51:12.210876176 +0000 UTC m=+1073.855512946" observedRunningTime="2026-02-02 14:51:20.71543087 +0000 UTC m=+1082.360067630" watchObservedRunningTime="2026-02-02 14:51:20.718528896 +0000 UTC m=+1082.363165666" Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.657099 4869 generic.go:334] "Generic (PLEG): container finished" podID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerID="7819a6f12b4ee4b2e0e6548b9439122ce17a185d8262e570c2db8127e890e849" exitCode=0 Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.657209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerDied","Data":"7819a6f12b4ee4b2e0e6548b9439122ce17a185d8262e570c2db8127e890e849"} Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.661069 4869 generic.go:334] "Generic (PLEG): container finished" podID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerID="d4bc95d2879e70b645a2e7e235f1fbdcdf5fe19a1ef7176a88d572c086b1c57b" exitCode=0 Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.661154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerDied","Data":"d4bc95d2879e70b645a2e7e235f1fbdcdf5fe19a1ef7176a88d572c086b1c57b"} Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.663984 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerStarted","Data":"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7"} Feb 02 14:51:21 crc kubenswrapper[4869]: 
I0202 14:51:21.666295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9a1c388-0473-4284-9a2c-09e3d97858f2","Type":"ContainerStarted","Data":"1537b682b197cf64754fc557947db1b13d8d218e2346b3868478942db4c7b9eb"}
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.668827 4869 generic.go:334] "Generic (PLEG): container finished" podID="79eb9544-e5e9-455c-94ca-bb36fa6eb873" containerID="a581fb6071039795143b024e23ba0276e0285d6df07b1b2559bd3e81a25e5819" exitCode=0
Feb 02 14:51:21 crc kubenswrapper[4869]: I0202 14:51:21.668892 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bd7dt" event={"ID":"79eb9544-e5e9-455c-94ca-bb36fa6eb873","Type":"ContainerDied","Data":"a581fb6071039795143b024e23ba0276e0285d6df07b1b2559bd3e81a25e5819"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.689065 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerStarted","Data":"63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.689976 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.691120 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"208fe19b-f03b-4a68-b6f2-f9dc3783239e","Type":"ContainerStarted","Data":"2b6e8cf0074a3e1b10b9838ac29e513619e8774be1c3be6cc3a2358e37722d5b"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.693062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerStarted","Data":"b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.693163 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-4c4vl"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.695532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9a1c388-0473-4284-9a2c-09e3d97858f2","Type":"ContainerStarted","Data":"2aa03bb95ca126ad4f0aa8e30199b4e48a973bd950b896167ae7da8fb2b11935"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.697481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"52d7887e-0487-4179-a0af-6f51b9eed8e7","Type":"ContainerStarted","Data":"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.697784 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.699354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sr5dv" event={"ID":"2b612893-5e70-472a-a65f-0d0c66f82de3","Type":"ContainerStarted","Data":"bf94e3195500303a722179095cae6bf7f79a08cad1f791832b07ed7d953faa63"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.701771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bd7dt" event={"ID":"79eb9544-e5e9-455c-94ca-bb36fa6eb873","Type":"ContainerStarted","Data":"40be43190fd4cc09839c1b1e0bfd2813fa6b14c34c62ec45073d527453d84427"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.701803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bd7dt" event={"ID":"79eb9544-e5e9-455c-94ca-bb36fa6eb873","Type":"ContainerStarted","Data":"25a4ea77a1c455d146e841e2467b8fad7f941ee565a4984a83bee500a38e7c08"}
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.702037 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bd7dt"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.702095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bd7dt"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.711337 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.711392 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.722145 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" podStartSLOduration=7.119713408 podStartE2EDuration="7.722120881s" podCreationTimestamp="2026-02-02 14:51:16 +0000 UTC" firstStartedPulling="2026-02-02 14:51:19.575294985 +0000 UTC m=+1081.219931745" lastFinishedPulling="2026-02-02 14:51:20.177702448 +0000 UTC m=+1081.822339218" observedRunningTime="2026-02-02 14:51:23.717321943 +0000 UTC m=+1085.361958723" watchObservedRunningTime="2026-02-02 14:51:23.722120881 +0000 UTC m=+1085.366757651"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.741956 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.583913161 podStartE2EDuration="28.741935192s" podCreationTimestamp="2026-02-02 14:50:55 +0000 UTC" firstStartedPulling="2026-02-02 14:51:13.287481937 +0000 UTC m=+1074.932118707" lastFinishedPulling="2026-02-02 14:51:22.445503968 +0000 UTC m=+1084.090140738" observedRunningTime="2026-02-02 14:51:23.736146518 +0000 UTC m=+1085.380783298" watchObservedRunningTime="2026-02-02 14:51:23.741935192 +0000 UTC m=+1085.386571972"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.764633 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.769473 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bd7dt" podStartSLOduration=27.472903761 podStartE2EDuration="31.769448713s" podCreationTimestamp="2026-02-02 14:50:52 +0000 UTC" firstStartedPulling="2026-02-02 14:51:15.142859639 +0000 UTC m=+1076.787496409" lastFinishedPulling="2026-02-02 14:51:19.439404601 +0000 UTC m=+1081.084041361" observedRunningTime="2026-02-02 14:51:23.76166355 +0000 UTC m=+1085.406300320" watchObservedRunningTime="2026-02-02 14:51:23.769448713 +0000 UTC m=+1085.414085493"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.785073 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=26.507294874 podStartE2EDuration="35.785052879s" podCreationTimestamp="2026-02-02 14:50:48 +0000 UTC" firstStartedPulling="2026-02-02 14:51:13.128952383 +0000 UTC m=+1074.773589153" lastFinishedPulling="2026-02-02 14:51:22.406710368 +0000 UTC m=+1084.051347158" observedRunningTime="2026-02-02 14:51:23.78020551 +0000 UTC m=+1085.424842290" watchObservedRunningTime="2026-02-02 14:51:23.785052879 +0000 UTC m=+1085.429689659"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.826294 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.552339709 podStartE2EDuration="31.826275339s" podCreationTimestamp="2026-02-02 14:50:52 +0000 UTC" firstStartedPulling="2026-02-02 14:51:15.142610492 +0000 UTC m=+1076.787247262" lastFinishedPulling="2026-02-02 14:51:22.416546102 +0000 UTC m=+1084.061182892" observedRunningTime="2026-02-02 14:51:23.811963545 +0000 UTC m=+1085.456600325" watchObservedRunningTime="2026-02-02 14:51:23.826275339 +0000 UTC m=+1085.470912109"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.834717 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-sr5dv" podStartSLOduration=5.77847641 podStartE2EDuration="8.834700048s" podCreationTimestamp="2026-02-02 14:51:15 +0000 UTC" firstStartedPulling="2026-02-02 14:51:19.350434839 +0000 UTC m=+1080.995071609" lastFinishedPulling="2026-02-02 14:51:22.406658477 +0000 UTC m=+1084.051295247" observedRunningTime="2026-02-02 14:51:23.829746486 +0000 UTC m=+1085.474383256" watchObservedRunningTime="2026-02-02 14:51:23.834700048 +0000 UTC m=+1085.479336818"
Feb 02 14:51:23 crc kubenswrapper[4869]: I0202 14:51:23.867710 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-4c4vl" podStartSLOduration=7.017610681 podStartE2EDuration="7.867683475s" podCreationTimestamp="2026-02-02 14:51:16 +0000 UTC" firstStartedPulling="2026-02-02 14:51:19.325791348 +0000 UTC m=+1080.970428118" lastFinishedPulling="2026-02-02 14:51:20.175864152 +0000 UTC m=+1081.820500912" observedRunningTime="2026-02-02 14:51:23.850804947 +0000 UTC m=+1085.495441737" watchObservedRunningTime="2026-02-02 14:51:23.867683475 +0000 UTC m=+1085.512320245"
Feb 02 14:51:25 crc kubenswrapper[4869]: I0202 14:51:25.775388 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.079862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.079948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.219705 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.461819 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.461996 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.502634 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.786998 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.914706 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.971449 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.973539 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.978449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.982540 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.984290 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-shdlb"
Feb 02 14:51:26 crc kubenswrapper[4869]: I0202 14:51:26.997435 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.008015 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080565 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080682 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ssl\" (UniqueName: \"kubernetes.io/projected/f502e55d-56a7-4238-b2cc-46a4c2eb3945-kube-api-access-82ssl\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080772 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.080815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.081019 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-config\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.081086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-scripts\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.183744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ssl\" (UniqueName: \"kubernetes.io/projected/f502e55d-56a7-4238-b2cc-46a4c2eb3945-kube-api-access-82ssl\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.183871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.183936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.183987 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-config\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.184036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-scripts\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.184122 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.184162 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.184932 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.185331 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-scripts\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.185524 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f502e55d-56a7-4238-b2cc-46a4c2eb3945-config\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.193646 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.203231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.207634 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f502e55d-56a7-4238-b2cc-46a4c2eb3945-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.210854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ssl\" (UniqueName: \"kubernetes.io/projected/f502e55d-56a7-4238-b2cc-46a4c2eb3945-kube-api-access-82ssl\") pod \"ovn-northd-0\" (UID: \"f502e55d-56a7-4238-b2cc-46a4c2eb3945\") " pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.299515 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.299581 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.301073 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.414203 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.424358 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hqz6l"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.425630 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.441740 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.443611 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.448554 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.469079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hqz6l"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.489460 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.489655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.489700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.489960 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.493259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.503630 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.602476 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.603175 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.603235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.603293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.605961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.622251 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.646504 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") pod \"placement-db-create-hqz6l\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") " pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.650893 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") pod \"placement-de8f-account-create-update-7gxr8\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") " pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.738793 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-6nfjx"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.740323 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.752036 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6nfjx"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.763735 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.777683 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.825494 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.825572 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.839251 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.844604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.848157 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.858821 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"]
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.927371 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.927438 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.929364 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.938900 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 02 14:51:27 crc kubenswrapper[4869]: I0202 14:51:27.955242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") pod \"glance-db-create-6nfjx\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") " pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.029521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.029572 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.045197 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.068343 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.132022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.132074 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.133285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.163657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") pod \"glance-775d-account-create-update-mc2f8\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.170423 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.422394 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hqz6l"]
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.697733 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"]
Feb 02 14:51:28 crc kubenswrapper[4869]: W0202 14:51:28.700358 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57ed4541_0cbb_4412_b054_fe72923fc2ba.slice/crio-768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65 WatchSource:0}: Error finding container 768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65: Status 404 returned error can't find the container with id 768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.761643 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f502e55d-56a7-4238-b2cc-46a4c2eb3945","Type":"ContainerStarted","Data":"10831d4bcc622b0b7eb940eb7a1486f3ca8b2ca5db0102460ed44c44902a850d"}
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.770011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqz6l" event={"ID":"2cae9d7b-b1d0-4745-801d-14b5f1e5f959","Type":"ContainerStarted","Data":"df71e565c4a1044f26889a098a902ff1f6378130dffa835480e68b3744d9258f"}
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.770056 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqz6l" event={"ID":"2cae9d7b-b1d0-4745-801d-14b5f1e5f959","Type":"ContainerStarted","Data":"fe14be75a1800d62e9b67cddf1c8c2e5476e5e2b193631d4ce38d708f24a91ca"}
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.779865 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-de8f-account-create-update-7gxr8" event={"ID":"57ed4541-0cbb-4412-b054-fe72923fc2ba","Type":"ContainerStarted","Data":"768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65"}
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.788434 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-hqz6l" podStartSLOduration=1.788413399 podStartE2EDuration="1.788413399s" podCreationTimestamp="2026-02-02 14:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:51:28.786942613 +0000 UTC m=+1090.431579383" watchObservedRunningTime="2026-02-02 14:51:28.788413399 +0000 UTC m=+1090.433050169"
Feb 02 14:51:28 crc kubenswrapper[4869]: I0202 14:51:28.916790 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-6nfjx"]
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.040590 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"]
Feb 02 14:51:29 crc kubenswrapper[4869]: W0202 14:51:29.048304 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod667b6a5a_a090_407f_a4c1_229be7db4fbc.slice/crio-50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def WatchSource:0}: Error finding container 50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def: Status 404 returned error can't find the container with id 50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.245099 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.793108 4869 generic.go:334] "Generic (PLEG): container finished" podID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" containerID="df71e565c4a1044f26889a098a902ff1f6378130dffa835480e68b3744d9258f" exitCode=0
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.793210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqz6l" event={"ID":"2cae9d7b-b1d0-4745-801d-14b5f1e5f959","Type":"ContainerDied","Data":"df71e565c4a1044f26889a098a902ff1f6378130dffa835480e68b3744d9258f"}
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.794613 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-775d-account-create-update-mc2f8" event={"ID":"667b6a5a-a090-407f-a4c1-229be7db4fbc","Type":"ContainerStarted","Data":"50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def"}
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.797314 4869 generic.go:334] "Generic (PLEG): container finished" podID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" containerID="d6f5aeb4cb8e140e0ec76f751f66f1f3334b226154def23e06d3735565e7a00e" exitCode=0
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.797481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6nfjx" event={"ID":"fc85b87e-a9f7-4407-8f88-59b46f424fe5","Type":"ContainerDied","Data":"d6f5aeb4cb8e140e0ec76f751f66f1f3334b226154def23e06d3735565e7a00e"}
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.797516 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6nfjx" event={"ID":"fc85b87e-a9f7-4407-8f88-59b46f424fe5","Type":"ContainerStarted","Data":"88ab34f5cb79551510be237f75a59a62a97ace89c907b1652139d4ddbf0f2615"}
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.799850 4869 generic.go:334] "Generic (PLEG): container finished" podID="57ed4541-0cbb-4412-b054-fe72923fc2ba" containerID="78a897732627685686d46c9cdceda0daa9d9401b96294c575ac6408193fb1e9d" exitCode=0
Feb 02 14:51:29 crc kubenswrapper[4869]: I0202 14:51:29.799931 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-de8f-account-create-update-7gxr8" event={"ID":"57ed4541-0cbb-4412-b054-fe72923fc2ba","Type":"ContainerDied","Data":"78a897732627685686d46c9cdceda0daa9d9401b96294c575ac6408193fb1e9d"}
Feb 02 14:51:30 crc kubenswrapper[4869]: I0202 14:51:30.811976 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f502e55d-56a7-4238-b2cc-46a4c2eb3945","Type":"ContainerStarted","Data":"c2898c29c7ac00e9470327dfac98457f4ec58d0bc1ca81d493d5f1b2e5424cb4"}
Feb 02 14:51:30 crc kubenswrapper[4869]: I0202 14:51:30.814503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-775d-account-create-update-mc2f8" event={"ID":"667b6a5a-a090-407f-a4c1-229be7db4fbc","Type":"ContainerStarted","Data":"6bee5e75e372cb2aba6043898d69e0608376d17242ffd94d857f28f9662a9176"}
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.226978 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.322950 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") pod \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.323127 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") pod \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\" (UID: \"fc85b87e-a9f7-4407-8f88-59b46f424fe5\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.324629 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc85b87e-a9f7-4407-8f88-59b46f424fe5" (UID: "fc85b87e-a9f7-4407-8f88-59b46f424fe5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.335346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz" (OuterVolumeSpecName: "kube-api-access-88gjz") pod "fc85b87e-a9f7-4407-8f88-59b46f424fe5" (UID: "fc85b87e-a9f7-4407-8f88-59b46f424fe5"). InnerVolumeSpecName "kube-api-access-88gjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.396979 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.404598 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.426173 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc85b87e-a9f7-4407-8f88-59b46f424fe5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.426233 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88gjz\" (UniqueName: \"kubernetes.io/projected/fc85b87e-a9f7-4407-8f88-59b46f424fe5-kube-api-access-88gjz\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.427189 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.527648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") pod \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.527882 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") pod \"57ed4541-0cbb-4412-b054-fe72923fc2ba\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.528018 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") pod \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\" (UID: \"2cae9d7b-b1d0-4745-801d-14b5f1e5f959\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.528141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") pod \"57ed4541-0cbb-4412-b054-fe72923fc2ba\" (UID: \"57ed4541-0cbb-4412-b054-fe72923fc2ba\") "
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.528811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cae9d7b-b1d0-4745-801d-14b5f1e5f959" (UID: "2cae9d7b-b1d0-4745-801d-14b5f1e5f959"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.530153 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57ed4541-0cbb-4412-b054-fe72923fc2ba" (UID: "57ed4541-0cbb-4412-b054-fe72923fc2ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.544351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6" (OuterVolumeSpecName: "kube-api-access-7n7j6") pod "2cae9d7b-b1d0-4745-801d-14b5f1e5f959" (UID: "2cae9d7b-b1d0-4745-801d-14b5f1e5f959"). InnerVolumeSpecName "kube-api-access-7n7j6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.549751 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v" (OuterVolumeSpecName: "kube-api-access-4rv6v") pod "57ed4541-0cbb-4412-b054-fe72923fc2ba" (UID: "57ed4541-0cbb-4412-b054-fe72923fc2ba"). InnerVolumeSpecName "kube-api-access-4rv6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.631187 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rv6v\" (UniqueName: \"kubernetes.io/projected/57ed4541-0cbb-4412-b054-fe72923fc2ba-kube-api-access-4rv6v\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.631246 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.631265 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57ed4541-0cbb-4412-b054-fe72923fc2ba-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.631278 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7j6\" (UniqueName: \"kubernetes.io/projected/2cae9d7b-b1d0-4745-801d-14b5f1e5f959-kube-api-access-7n7j6\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.709134 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-4c4vl"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.799422 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"]
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.825927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqz6l" event={"ID":"2cae9d7b-b1d0-4745-801d-14b5f1e5f959","Type":"ContainerDied","Data":"fe14be75a1800d62e9b67cddf1c8c2e5476e5e2b193631d4ce38d708f24a91ca"}
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.825981 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe14be75a1800d62e9b67cddf1c8c2e5476e5e2b193631d4ce38d708f24a91ca"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.826095 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqz6l"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.829212 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-6nfjx"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.829619 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-6nfjx" event={"ID":"fc85b87e-a9f7-4407-8f88-59b46f424fe5","Type":"ContainerDied","Data":"88ab34f5cb79551510be237f75a59a62a97ace89c907b1652139d4ddbf0f2615"}
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.829658 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88ab34f5cb79551510be237f75a59a62a97ace89c907b1652139d4ddbf0f2615"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.835851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-de8f-account-create-update-7gxr8" event={"ID":"57ed4541-0cbb-4412-b054-fe72923fc2ba","Type":"ContainerDied","Data":"768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65"}
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.835939 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="768d5cb28289f227c9d3e50480dab42f089624f5fd05e0f8a22167ae4e46ec65"
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.836467 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="dnsmasq-dns" containerID="cri-o://63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d" gracePeriod=10
Feb 02 14:51:31 crc kubenswrapper[4869]: I0202 14:51:31.836705 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-de8f-account-create-update-7gxr8"
Feb 02 14:51:32 crc kubenswrapper[4869]: I0202 14:51:32.846559 4869 generic.go:334] "Generic (PLEG): container finished" podID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerID="63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d" exitCode=0
Feb 02 14:51:32 crc kubenswrapper[4869]: I0202 14:51:32.846660 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerDied","Data":"63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d"}
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.594979 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rw49p"]
Feb 02 14:51:34 crc kubenswrapper[4869]: E0202 14:51:34.595862 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ed4541-0cbb-4412-b054-fe72923fc2ba" containerName="mariadb-account-create-update"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.595879 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ed4541-0cbb-4412-b054-fe72923fc2ba" containerName="mariadb-account-create-update"
Feb 02 14:51:34 crc kubenswrapper[4869]: E0202 14:51:34.595896 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" containerName="mariadb-database-create"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.597773 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" containerName="mariadb-database-create"
Feb 02 14:51:34 crc kubenswrapper[4869]: E0202 14:51:34.597891 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" containerName="mariadb-database-create"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.597904 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" containerName="mariadb-database-create"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.598267 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" containerName="mariadb-database-create"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.598288 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" containerName="mariadb-database-create"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.598313 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ed4541-0cbb-4412-b054-fe72923fc2ba" containerName="mariadb-account-create-update"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.599072 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rw49p"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.602882 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.620067 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rw49p"]
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.693007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.693270 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.796345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.795216 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.796522 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.824610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") pod \"root-account-create-update-rw49p\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " pod="openstack/root-account-create-update-rw49p"
Feb 02 14:51:34 crc kubenswrapper[4869]: I0202 14:51:34.925283 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rw49p"
Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.386106 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rw49p"]
Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.870820 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rw49p" event={"ID":"6b49613f-eb42-441c-a98e-651ac383358e","Type":"ContainerStarted","Data":"c0eba43d199f953d9626b7c88c284ea5aa7158b0c7b330e5e8b9495c554b8a8e"}
Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.871320 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rw49p" event={"ID":"6b49613f-eb42-441c-a98e-651ac383358e","Type":"ContainerStarted","Data":"2aa604e3dfd2060c4fc58fbd9ba211d90108d9d1fb97d4ced519b6388e7d6bc1"}
Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.872422 4869 generic.go:334] "Generic (PLEG): container finished" podID="667b6a5a-a090-407f-a4c1-229be7db4fbc" containerID="6bee5e75e372cb2aba6043898d69e0608376d17242ffd94d857f28f9662a9176" exitCode=0
Feb 02 14:51:35 crc kubenswrapper[4869]: I0202 14:51:35.872447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-775d-account-create-update-mc2f8" event={"ID":"667b6a5a-a090-407f-a4c1-229be7db4fbc","Type":"ContainerDied","Data":"6bee5e75e372cb2aba6043898d69e0608376d17242ffd94d857f28f9662a9176"}
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.398127 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm"
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.571347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") pod \"2cf07564-1cdf-4897-be34-68c8d9ec7534\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") "
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.571480 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pffdv\" (UniqueName: \"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") pod \"2cf07564-1cdf-4897-be34-68c8d9ec7534\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") "
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.571511 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") pod \"2cf07564-1cdf-4897-be34-68c8d9ec7534\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") "
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.571562 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") pod \"2cf07564-1cdf-4897-be34-68c8d9ec7534\" (UID: \"2cf07564-1cdf-4897-be34-68c8d9ec7534\") "
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.578421 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv" (OuterVolumeSpecName: "kube-api-access-pffdv") pod "2cf07564-1cdf-4897-be34-68c8d9ec7534" (UID: "2cf07564-1cdf-4897-be34-68c8d9ec7534"). InnerVolumeSpecName "kube-api-access-pffdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.616786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config" (OuterVolumeSpecName: "config") pod "2cf07564-1cdf-4897-be34-68c8d9ec7534" (UID: "2cf07564-1cdf-4897-be34-68c8d9ec7534"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.622580 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2cf07564-1cdf-4897-be34-68c8d9ec7534" (UID: "2cf07564-1cdf-4897-be34-68c8d9ec7534"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.624560 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2cf07564-1cdf-4897-be34-68c8d9ec7534" (UID: "2cf07564-1cdf-4897-be34-68c8d9ec7534"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.675962 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.676248 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pffdv\" (UniqueName: \"kubernetes.io/projected/2cf07564-1cdf-4897-be34-68c8d9ec7534-kube-api-access-pffdv\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.676342 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.676429 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2cf07564-1cdf-4897-be34-68c8d9ec7534-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.883633 4869 generic.go:334] "Generic (PLEG): container finished" podID="6b49613f-eb42-441c-a98e-651ac383358e" containerID="c0eba43d199f953d9626b7c88c284ea5aa7158b0c7b330e5e8b9495c554b8a8e" exitCode=0
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.883726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rw49p" event={"ID":"6b49613f-eb42-441c-a98e-651ac383358e","Type":"ContainerDied","Data":"c0eba43d199f953d9626b7c88c284ea5aa7158b0c7b330e5e8b9495c554b8a8e"}
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.887098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm" event={"ID":"2cf07564-1cdf-4897-be34-68c8d9ec7534","Type":"ContainerDied","Data":"1b63d87640dcc4282fece22b35edaae93b0361d36791dae4830d5545dc5841ff"}
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.887137 4869 scope.go:117] "RemoveContainer" containerID="63ba17de8d348aae8fa8daf83de0caecadc26475e604356c46fa2a462a18548d"
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.887141 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-frtgm"
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.891317 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f502e55d-56a7-4238-b2cc-46a4c2eb3945","Type":"ContainerStarted","Data":"c87574b3c52a146aab94e0f857bb893569a9afb1c8ab1319d43693e7c4a95500"}
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.891360 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.917061 4869 scope.go:117] "RemoveContainer" containerID="7819a6f12b4ee4b2e0e6548b9439122ce17a185d8262e570c2db8127e890e849"
Feb 02 14:51:36 crc kubenswrapper[4869]: I0202 14:51:36.963868 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=9.460689572 podStartE2EDuration="10.963837443s" podCreationTimestamp="2026-02-02 14:51:26 +0000 UTC" firstStartedPulling="2026-02-02 14:51:28.061394682 +0000 UTC m=+1089.706031462" lastFinishedPulling="2026-02-02 14:51:29.564542563 +0000 UTC m=+1091.209179333" observedRunningTime="2026-02-02 14:51:36.931100113 +0000 UTC m=+1098.575736893" watchObservedRunningTime="2026-02-02 14:51:36.963837443 +0000 UTC m=+1098.608474213"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.003182 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"]
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.011360 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-frtgm"]
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.015758 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wqbqn"]
Feb 02 14:51:37 crc kubenswrapper[4869]: E0202 14:51:37.016482 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="init"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.016507 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="init"
Feb 02 14:51:37 crc kubenswrapper[4869]: E0202 14:51:37.016535 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="dnsmasq-dns"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.016548 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="dnsmasq-dns"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.016745 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" containerName="dnsmasq-dns"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.017750 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wqbqn"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.022405 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wqbqn"]
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.079850 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"]
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.081382 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66c2-account-create-update-m2vvf"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.083845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.083989 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.084171 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.093525 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"]
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.185793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.186300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.186352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.186480 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.187159 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.213691 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") pod \"keystone-db-create-wqbqn\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " pod="openstack/keystone-db-create-wqbqn"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.287539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.287711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.288719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.307796 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") pod \"keystone-66c2-account-create-update-m2vvf\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " pod="openstack/keystone-66c2-account-create-update-m2vvf"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.332584 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wqbqn"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.332695 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-775d-account-create-update-mc2f8"
Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.401652 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.475415 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf07564-1cdf-4897-be34-68c8d9ec7534" path="/var/lib/kubelet/pods/2cf07564-1cdf-4897-be34-68c8d9ec7534/volumes" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.492481 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") pod \"667b6a5a-a090-407f-a4c1-229be7db4fbc\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.492815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") pod \"667b6a5a-a090-407f-a4c1-229be7db4fbc\" (UID: \"667b6a5a-a090-407f-a4c1-229be7db4fbc\") " Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.493564 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "667b6a5a-a090-407f-a4c1-229be7db4fbc" (UID: "667b6a5a-a090-407f-a4c1-229be7db4fbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.502946 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp" (OuterVolumeSpecName: "kube-api-access-gfplp") pod "667b6a5a-a090-407f-a4c1-229be7db4fbc" (UID: "667b6a5a-a090-407f-a4c1-229be7db4fbc"). InnerVolumeSpecName "kube-api-access-gfplp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.594605 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/667b6a5a-a090-407f-a4c1-229be7db4fbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.595067 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfplp\" (UniqueName: \"kubernetes.io/projected/667b6a5a-a090-407f-a4c1-229be7db4fbc-kube-api-access-gfplp\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.797381 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wqbqn"] Feb 02 14:51:37 crc kubenswrapper[4869]: W0202 14:51:37.800779 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod663a2e70_1d18_41b3_bc31_7e8b44f00450.slice/crio-8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105 WatchSource:0}: Error finding container 8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105: Status 404 returned error can't find the container with id 8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105 Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.911805 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-775d-account-create-update-mc2f8" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.911820 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-775d-account-create-update-mc2f8" event={"ID":"667b6a5a-a090-407f-a4c1-229be7db4fbc","Type":"ContainerDied","Data":"50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def"} Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.911859 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50778c1b33f90e1accc6c04b9baa2a1f750e9c6ccb015034e37861fa21136def" Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.916401 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"] Feb 02 14:51:37 crc kubenswrapper[4869]: I0202 14:51:37.920054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqbqn" event={"ID":"663a2e70-1d18-41b3-bc31-7e8b44f00450","Type":"ContainerStarted","Data":"8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105"} Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.273562 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.312418 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") pod \"6b49613f-eb42-441c-a98e-651ac383358e\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.312518 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") pod \"6b49613f-eb42-441c-a98e-651ac383358e\" (UID: \"6b49613f-eb42-441c-a98e-651ac383358e\") " Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.313968 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b49613f-eb42-441c-a98e-651ac383358e" (UID: "6b49613f-eb42-441c-a98e-651ac383358e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.321265 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2" (OuterVolumeSpecName: "kube-api-access-cfdv2") pod "6b49613f-eb42-441c-a98e-651ac383358e" (UID: "6b49613f-eb42-441c-a98e-651ac383358e"). InnerVolumeSpecName "kube-api-access-cfdv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.414653 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b49613f-eb42-441c-a98e-651ac383358e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.414701 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfdv2\" (UniqueName: \"kubernetes.io/projected/6b49613f-eb42-441c-a98e-651ac383358e-kube-api-access-cfdv2\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.947676 4869 generic.go:334] "Generic (PLEG): container finished" podID="663a2e70-1d18-41b3-bc31-7e8b44f00450" containerID="6d8d94685f54694bdd3d654fd30340b20f11060d58afcb8b6db65cc019ab404b" exitCode=0 Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.948123 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqbqn" event={"ID":"663a2e70-1d18-41b3-bc31-7e8b44f00450","Type":"ContainerDied","Data":"6d8d94685f54694bdd3d654fd30340b20f11060d58afcb8b6db65cc019ab404b"} Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.964284 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rw49p" event={"ID":"6b49613f-eb42-441c-a98e-651ac383358e","Type":"ContainerDied","Data":"2aa604e3dfd2060c4fc58fbd9ba211d90108d9d1fb97d4ced519b6388e7d6bc1"} Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.964354 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa604e3dfd2060c4fc58fbd9ba211d90108d9d1fb97d4ced519b6388e7d6bc1" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.964458 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rw49p" Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.997180 4869 generic.go:334] "Generic (PLEG): container finished" podID="695a8791-53fd-414d-af01-753483223d32" containerID="9b15642290472abfbc4ace64421c6af055e5988041270bd6769c924998672a78" exitCode=0 Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.997240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66c2-account-create-update-m2vvf" event={"ID":"695a8791-53fd-414d-af01-753483223d32","Type":"ContainerDied","Data":"9b15642290472abfbc4ace64421c6af055e5988041270bd6769c924998672a78"} Feb 02 14:51:38 crc kubenswrapper[4869]: I0202 14:51:38.997268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66c2-account-create-update-m2vvf" event={"ID":"695a8791-53fd-414d-af01-753483223d32","Type":"ContainerStarted","Data":"d4f078817dc98e4b14dcc6bdd60ef30263955ff02a7ab4a8c067ddb673feb707"} Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.434073 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.442844 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.552628 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") pod \"663a2e70-1d18-41b3-bc31-7e8b44f00450\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.552731 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") pod \"695a8791-53fd-414d-af01-753483223d32\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.552792 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") pod \"695a8791-53fd-414d-af01-753483223d32\" (UID: \"695a8791-53fd-414d-af01-753483223d32\") " Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.552885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") pod \"663a2e70-1d18-41b3-bc31-7e8b44f00450\" (UID: \"663a2e70-1d18-41b3-bc31-7e8b44f00450\") " Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.555214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "663a2e70-1d18-41b3-bc31-7e8b44f00450" (UID: "663a2e70-1d18-41b3-bc31-7e8b44f00450"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.557465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "695a8791-53fd-414d-af01-753483223d32" (UID: "695a8791-53fd-414d-af01-753483223d32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.563818 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd" (OuterVolumeSpecName: "kube-api-access-8f4bd") pod "695a8791-53fd-414d-af01-753483223d32" (UID: "695a8791-53fd-414d-af01-753483223d32"). InnerVolumeSpecName "kube-api-access-8f4bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.576441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp" (OuterVolumeSpecName: "kube-api-access-ch6kp") pod "663a2e70-1d18-41b3-bc31-7e8b44f00450" (UID: "663a2e70-1d18-41b3-bc31-7e8b44f00450"). InnerVolumeSpecName "kube-api-access-ch6kp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.658635 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch6kp\" (UniqueName: \"kubernetes.io/projected/663a2e70-1d18-41b3-bc31-7e8b44f00450-kube-api-access-ch6kp\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.659594 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f4bd\" (UniqueName: \"kubernetes.io/projected/695a8791-53fd-414d-af01-753483223d32-kube-api-access-8f4bd\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.659695 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/695a8791-53fd-414d-af01-753483223d32-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.659813 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663a2e70-1d18-41b3-bc31-7e8b44f00450-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.796657 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rw49p"] Feb 02 14:51:40 crc kubenswrapper[4869]: I0202 14:51:40.803287 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rw49p"] Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.014612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66c2-account-create-update-m2vvf" event={"ID":"695a8791-53fd-414d-af01-753483223d32","Type":"ContainerDied","Data":"d4f078817dc98e4b14dcc6bdd60ef30263955ff02a7ab4a8c067ddb673feb707"} Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.014656 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4f078817dc98e4b14dcc6bdd60ef30263955ff02a7ab4a8c067ddb673feb707" Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.014655 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66c2-account-create-update-m2vvf" Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.017282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wqbqn" event={"ID":"663a2e70-1d18-41b3-bc31-7e8b44f00450","Type":"ContainerDied","Data":"8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105"} Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.017329 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2f6dc2f3884b80ff8e640ad0b7f987136341b5a1e79265c7dd9fad4e003105" Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.017382 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wqbqn" Feb 02 14:51:41 crc kubenswrapper[4869]: I0202 14:51:41.480235 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b49613f-eb42-441c-a98e-651ac383358e" path="/var/lib/kubelet/pods/6b49613f-eb42-441c-a98e-651ac383358e/volumes" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.011813 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 14:51:43 crc kubenswrapper[4869]: E0202 14:51:43.012426 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695a8791-53fd-414d-af01-753483223d32" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.012480 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="695a8791-53fd-414d-af01-753483223d32" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: E0202 14:51:43.012553 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663a2e70-1d18-41b3-bc31-7e8b44f00450" containerName="mariadb-database-create" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.012559 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="663a2e70-1d18-41b3-bc31-7e8b44f00450" containerName="mariadb-database-create" Feb 02 14:51:43 crc kubenswrapper[4869]: E0202 14:51:43.012568 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b49613f-eb42-441c-a98e-651ac383358e" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.012575 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b49613f-eb42-441c-a98e-651ac383358e" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: E0202 14:51:43.012667 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667b6a5a-a090-407f-a4c1-229be7db4fbc" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.012674 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="667b6a5a-a090-407f-a4c1-229be7db4fbc" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013076 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="663a2e70-1d18-41b3-bc31-7e8b44f00450" containerName="mariadb-database-create" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013096 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b49613f-eb42-441c-a98e-651ac383358e" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013108 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="667b6a5a-a090-407f-a4c1-229be7db4fbc" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013118 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="695a8791-53fd-414d-af01-753483223d32" containerName="mariadb-account-create-update" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.013867 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.016779 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q8bdk" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.017594 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.030348 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.107557 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.107665 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.107696 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.107725 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.209207 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.209296 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.209330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.209413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") pod 
\"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.217159 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.217185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.218189 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.231021 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") pod \"glance-db-sync-nmqdp\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.344024 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nmqdp" Feb 02 14:51:43 crc kubenswrapper[4869]: I0202 14:51:43.903239 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 14:51:44 crc kubenswrapper[4869]: I0202 14:51:44.054783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmqdp" event={"ID":"8d01d875-1fd0-4d36-9077-337e2549b17c","Type":"ContainerStarted","Data":"99b5ca7935cfbc4a1d283bd53d5a36a9759bf57b988d18b5c8f5c459c5a63c51"} Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.844183 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.845715 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.853085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.871293 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.980399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:45 crc kubenswrapper[4869]: I0202 14:51:45.980493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.077229 4869 generic.go:334] "Generic (PLEG): container finished" podID="95035071-a194-40ba-9b64-700ae3121dc4" containerID="5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93" exitCode=0 Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.077299 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerDied","Data":"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"} Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.082458 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.082519 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.083398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.119199 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") pod \"root-account-create-update-qx9sp\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.166683 4869 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:46 crc kubenswrapper[4869]: I0202 14:51:46.693436 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.089105 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qx9sp" event={"ID":"cedd0523-58d4-494f-9d04-76029ad9ca4d","Type":"ContainerStarted","Data":"1e93de4900a661d5dcfe910c46bd9a967faddfa20ef1e38b79c228fa5ebb022d"} Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.089179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qx9sp" event={"ID":"cedd0523-58d4-494f-9d04-76029ad9ca4d","Type":"ContainerStarted","Data":"2266dabb7f1c8e39cd8c38e3bb443e87550af12cc90d1334e4f69e4a7048fa16"} Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.092429 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerStarted","Data":"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"} Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.092714 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.110857 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qx9sp" podStartSLOduration=2.110837486 podStartE2EDuration="2.110837486s" podCreationTimestamp="2026-02-02 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:51:47.10414969 +0000 UTC m=+1108.748786460" watchObservedRunningTime="2026-02-02 14:51:47.110837486 +0000 UTC m=+1108.755474256" Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.130019 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.981233849 podStartE2EDuration="1m4.129983999s" podCreationTimestamp="2026-02-02 14:50:43 +0000 UTC" firstStartedPulling="2026-02-02 14:50:45.990293187 +0000 UTC m=+1047.634929957" lastFinishedPulling="2026-02-02 14:51:12.139043337 +0000 UTC m=+1073.783680107" observedRunningTime="2026-02-02 14:51:47.127739104 +0000 UTC m=+1108.772375874" watchObservedRunningTime="2026-02-02 14:51:47.129983999 +0000 UTC m=+1108.774620769" Feb 02 14:51:47 crc kubenswrapper[4869]: I0202 14:51:47.368584 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 14:51:48 crc kubenswrapper[4869]: I0202 14:51:48.103135 4869 generic.go:334] "Generic (PLEG): container finished" podID="cedd0523-58d4-494f-9d04-76029ad9ca4d" containerID="1e93de4900a661d5dcfe910c46bd9a967faddfa20ef1e38b79c228fa5ebb022d" exitCode=0 Feb 02 14:51:48 crc kubenswrapper[4869]: I0202 14:51:48.103281 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qx9sp" event={"ID":"cedd0523-58d4-494f-9d04-76029ad9ca4d","Type":"ContainerDied","Data":"1e93de4900a661d5dcfe910c46bd9a967faddfa20ef1e38b79c228fa5ebb022d"} Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.487137 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.560099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") pod \"cedd0523-58d4-494f-9d04-76029ad9ca4d\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.560316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") pod \"cedd0523-58d4-494f-9d04-76029ad9ca4d\" (UID: \"cedd0523-58d4-494f-9d04-76029ad9ca4d\") " Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.561043 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cedd0523-58d4-494f-9d04-76029ad9ca4d" (UID: "cedd0523-58d4-494f-9d04-76029ad9ca4d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.567953 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x" (OuterVolumeSpecName: "kube-api-access-kwq8x") pod "cedd0523-58d4-494f-9d04-76029ad9ca4d" (UID: "cedd0523-58d4-494f-9d04-76029ad9ca4d"). InnerVolumeSpecName "kube-api-access-kwq8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.663424 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwq8x\" (UniqueName: \"kubernetes.io/projected/cedd0523-58d4-494f-9d04-76029ad9ca4d-kube-api-access-kwq8x\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:49 crc kubenswrapper[4869]: I0202 14:51:49.663774 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cedd0523-58d4-494f-9d04-76029ad9ca4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:50 crc kubenswrapper[4869]: I0202 14:51:50.129363 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qx9sp" event={"ID":"cedd0523-58d4-494f-9d04-76029ad9ca4d","Type":"ContainerDied","Data":"2266dabb7f1c8e39cd8c38e3bb443e87550af12cc90d1334e4f69e4a7048fa16"} Feb 02 14:51:50 crc kubenswrapper[4869]: I0202 14:51:50.129786 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2266dabb7f1c8e39cd8c38e3bb443e87550af12cc90d1334e4f69e4a7048fa16" Feb 02 14:51:50 crc kubenswrapper[4869]: I0202 14:51:50.129402 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qx9sp" Feb 02 14:51:52 crc kubenswrapper[4869]: I0202 14:51:52.891369 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-f7z74" podUID="d51425d7-d30c-466d-b478-17a637e3ef9f" containerName="ovn-controller" probeResult="failure" output=< Feb 02 14:51:52 crc kubenswrapper[4869]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 14:51:52 crc kubenswrapper[4869]: > Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.010161 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.054889 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bd7dt" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.288097 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:51:53 crc kubenswrapper[4869]: E0202 14:51:53.288479 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedd0523-58d4-494f-9d04-76029ad9ca4d" containerName="mariadb-account-create-update" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.288500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedd0523-58d4-494f-9d04-76029ad9ca4d" containerName="mariadb-account-create-update" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.288656 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedd0523-58d4-494f-9d04-76029ad9ca4d" containerName="mariadb-account-create-update" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.289378 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.292359 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.311757 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333554 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333646 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333787 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.333844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.436463 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.436946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") pod 
\"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437035 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437168 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437322 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.437394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.439211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.439244 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.440048 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.441356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") pod 
\"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.462185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") pod \"ovn-controller-f7z74-config-lzp54\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:53 crc kubenswrapper[4869]: I0202 14:51:53.620555 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:54 crc kubenswrapper[4869]: I0202 14:51:54.169723 4869 generic.go:334] "Generic (PLEG): container finished" podID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerID="9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7" exitCode=0 Feb 02 14:51:54 crc kubenswrapper[4869]: I0202 14:51:54.169804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerDied","Data":"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7"} Feb 02 14:51:57 crc kubenswrapper[4869]: I0202 14:51:57.416923 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:51:57 crc kubenswrapper[4869]: I0202 14:51:57.908668 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-f7z74" Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.208455 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" containerID="7ceee7ca0afb25fecb47c7d1ea7c643849b3e2a4371bef94fa2e91ed301777b9" exitCode=0 Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.208554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74-config-lzp54" event={"ID":"d1cce5e8-8297-4595-9c62-8d593ed35b0f","Type":"ContainerDied","Data":"7ceee7ca0afb25fecb47c7d1ea7c643849b3e2a4371bef94fa2e91ed301777b9"} Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.208592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74-config-lzp54" event={"ID":"d1cce5e8-8297-4595-9c62-8d593ed35b0f","Type":"ContainerStarted","Data":"ecf8e6a6d474b5e7476f29ad4ae29e234e11668280caa810ad6939e8040c4054"} Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.210625 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmqdp" event={"ID":"8d01d875-1fd0-4d36-9077-337e2549b17c","Type":"ContainerStarted","Data":"787a10a68dc71dc578d2b7b04e714c6b6fd52e9d48dc7f1b9e14020160b32eec"} Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.215825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerStarted","Data":"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1"} Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.216713 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.274017 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nmqdp" podStartSLOduration=3.199846367 podStartE2EDuration="16.273983463s" 
podCreationTimestamp="2026-02-02 14:51:42 +0000 UTC" firstStartedPulling="2026-02-02 14:51:43.912329515 +0000 UTC m=+1105.556966285" lastFinishedPulling="2026-02-02 14:51:56.986466611 +0000 UTC m=+1118.631103381" observedRunningTime="2026-02-02 14:51:58.267668627 +0000 UTC m=+1119.912305417" watchObservedRunningTime="2026-02-02 14:51:58.273983463 +0000 UTC m=+1119.918620243" Feb 02 14:51:58 crc kubenswrapper[4869]: I0202 14:51:58.302558 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371961.552246 podStartE2EDuration="1m15.30252928s" podCreationTimestamp="2026-02-02 14:50:43 +0000 UTC" firstStartedPulling="2026-02-02 14:50:45.572792672 +0000 UTC m=+1047.217429442" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:51:58.296520542 +0000 UTC m=+1119.941157332" watchObservedRunningTime="2026-02-02 14:51:58.30252928 +0000 UTC m=+1119.947166050" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.594623 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669320 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669470 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669539 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669581 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669689 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669713 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.669802 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") pod \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\" (UID: \"d1cce5e8-8297-4595-9c62-8d593ed35b0f\") " Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.670239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run" (OuterVolumeSpecName: "var-run") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.670245 4869 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.670228 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.670854 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.671478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts" (OuterVolumeSpecName: "scripts") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.676433 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg" (OuterVolumeSpecName: "kube-api-access-s77dg") pod "d1cce5e8-8297-4595-9c62-8d593ed35b0f" (UID: "d1cce5e8-8297-4595-9c62-8d593ed35b0f"). InnerVolumeSpecName "kube-api-access-s77dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.771880 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.772232 4869 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d1cce5e8-8297-4595-9c62-8d593ed35b0f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.772244 4869 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.772255 4869 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d1cce5e8-8297-4595-9c62-8d593ed35b0f-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 14:51:59 crc kubenswrapper[4869]: I0202 14:51:59.772263 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s77dg\" (UniqueName: \"kubernetes.io/projected/d1cce5e8-8297-4595-9c62-8d593ed35b0f-kube-api-access-s77dg\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.235835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-f7z74-config-lzp54" event={"ID":"d1cce5e8-8297-4595-9c62-8d593ed35b0f","Type":"ContainerDied","Data":"ecf8e6a6d474b5e7476f29ad4ae29e234e11668280caa810ad6939e8040c4054"} Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.235933 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecf8e6a6d474b5e7476f29ad4ae29e234e11668280caa810ad6939e8040c4054" Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.236018 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-f7z74-config-lzp54" Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.713124 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:52:00 crc kubenswrapper[4869]: I0202 14:52:00.719330 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-f7z74-config-lzp54"] Feb 02 14:52:01 crc kubenswrapper[4869]: I0202 14:52:01.476335 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" path="/var/lib/kubelet/pods/d1cce5e8-8297-4595-9c62-8d593ed35b0f/volumes" Feb 02 14:52:05 crc kubenswrapper[4869]: I0202 14:52:05.217231 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:52:05 crc kubenswrapper[4869]: I0202 14:52:05.282374 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d01d875-1fd0-4d36-9077-337e2549b17c" containerID="787a10a68dc71dc578d2b7b04e714c6b6fd52e9d48dc7f1b9e14020160b32eec" exitCode=0 Feb 02 14:52:05 crc kubenswrapper[4869]: I0202 14:52:05.282441 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmqdp" event={"ID":"8d01d875-1fd0-4d36-9077-337e2549b17c","Type":"ContainerDied","Data":"787a10a68dc71dc578d2b7b04e714c6b6fd52e9d48dc7f1b9e14020160b32eec"} Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.779383 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nmqdp" Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.910331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") pod \"8d01d875-1fd0-4d36-9077-337e2549b17c\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.910523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") pod \"8d01d875-1fd0-4d36-9077-337e2549b17c\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.910676 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") pod \"8d01d875-1fd0-4d36-9077-337e2549b17c\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.910701 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") pod \"8d01d875-1fd0-4d36-9077-337e2549b17c\" (UID: \"8d01d875-1fd0-4d36-9077-337e2549b17c\") " Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.918089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8d01d875-1fd0-4d36-9077-337e2549b17c" (UID: "8d01d875-1fd0-4d36-9077-337e2549b17c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.924131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7" (OuterVolumeSpecName: "kube-api-access-959n7") pod "8d01d875-1fd0-4d36-9077-337e2549b17c" (UID: "8d01d875-1fd0-4d36-9077-337e2549b17c"). InnerVolumeSpecName "kube-api-access-959n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.938700 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d01d875-1fd0-4d36-9077-337e2549b17c" (UID: "8d01d875-1fd0-4d36-9077-337e2549b17c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:06 crc kubenswrapper[4869]: I0202 14:52:06.959670 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data" (OuterVolumeSpecName: "config-data") pod "8d01d875-1fd0-4d36-9077-337e2549b17c" (UID: "8d01d875-1fd0-4d36-9077-337e2549b17c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.014165 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-959n7\" (UniqueName: \"kubernetes.io/projected/8d01d875-1fd0-4d36-9077-337e2549b17c-kube-api-access-959n7\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.014228 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.014248 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.014265 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d01d875-1fd0-4d36-9077-337e2549b17c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.305368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nmqdp" event={"ID":"8d01d875-1fd0-4d36-9077-337e2549b17c","Type":"ContainerDied","Data":"99b5ca7935cfbc4a1d283bd53d5a36a9759bf57b988d18b5c8f5c459c5a63c51"} Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.305723 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99b5ca7935cfbc4a1d283bd53d5a36a9759bf57b988d18b5c8f5c459c5a63c51" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.305927 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nmqdp" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.741252 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:07 crc kubenswrapper[4869]: E0202 14:52:07.741732 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d01d875-1fd0-4d36-9077-337e2549b17c" containerName="glance-db-sync" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.741754 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d01d875-1fd0-4d36-9077-337e2549b17c" containerName="glance-db-sync" Feb 02 14:52:07 crc kubenswrapper[4869]: E0202 14:52:07.741792 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" containerName="ovn-config" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.741801 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" containerName="ovn-config" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.742012 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d01d875-1fd0-4d36-9077-337e2549b17c" containerName="glance-db-sync" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.742040 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1cce5e8-8297-4595-9c62-8d593ed35b0f" containerName="ovn-config" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.743137 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.757752 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933120 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933367 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933439 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:07 crc kubenswrapper[4869]: I0202 14:52:07.933471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035675 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035742 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035805 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.035830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.037015 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.037113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.037152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.037863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" 
(UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.057233 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") pod \"dnsmasq-dns-554567b4f7-wgl4k\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") " pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.064332 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:08 crc kubenswrapper[4869]: I0202 14:52:08.524153 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:09 crc kubenswrapper[4869]: I0202 14:52:09.333515 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerID="bc9dde5f802202af7a85f0bef2eac6285904a7c6caf12c1643635106506e9002" exitCode=0 Feb 02 14:52:09 crc kubenswrapper[4869]: I0202 14:52:09.333572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerDied","Data":"bc9dde5f802202af7a85f0bef2eac6285904a7c6caf12c1643635106506e9002"} Feb 02 14:52:09 crc kubenswrapper[4869]: I0202 14:52:09.333973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerStarted","Data":"a735d4f93e2231ae2a788ee232093dfbb8748b09065788ca6cc6337170b33936"} Feb 02 14:52:10 crc kubenswrapper[4869]: I0202 14:52:10.344799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerStarted","Data":"21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266"} Feb 02 14:52:10 crc kubenswrapper[4869]: I0202 14:52:10.345418 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:10 crc kubenswrapper[4869]: I0202 14:52:10.370374 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podStartSLOduration=3.370346653 podStartE2EDuration="3.370346653s" podCreationTimestamp="2026-02-02 14:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:10.362196542 +0000 UTC m=+1132.006833312" watchObservedRunningTime="2026-02-02 14:52:10.370346653 +0000 UTC m=+1132.014983423" Feb 02 14:52:14 crc kubenswrapper[4869]: I0202 14:52:14.861179 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.251601 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.253284 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.278984 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.305291 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.305372 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.356985 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.358255 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.362425 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.364303 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.387234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.387357 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.387461 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9wc\" (UniqueName: \"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.387540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.488930 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v9wc\" (UniqueName: 
\"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.489416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.489501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.489546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.490245 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.490432 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.514750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v9wc\" (UniqueName: \"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") pod \"cinder-9bcf-account-create-update-pprmg\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.519229 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") pod \"cinder-db-create-wzwcn\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.523342 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.525003 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.528188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.528336 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.529104 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.529762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-72872" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.536300 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.574371 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.591168 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.591334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.591392 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.612462 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.614028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.624394 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.636766 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.654388 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.697339 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.697675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.697733 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.697780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.710183 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.724725 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.725230 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.715164 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.727036 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.736079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.736190 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.738717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") pod \"keystone-db-sync-6zf6z\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.785079 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.787687 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.798165 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.799627 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.829829 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.831872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.831953 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.832031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.832071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.832156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.832281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.834518 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.834526 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.850990 4869 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.852578 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.859110 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") pod \"barbican-db-create-kp9g2\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.860417 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.862945 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") pod \"barbican-2561-account-create-update-zwwnx\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.868021 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.934748 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.934816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.934854 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.934965 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.935936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:15 crc kubenswrapper[4869]: I0202 14:52:15.958608 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") pod \"neutron-db-create-bznrb\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.037942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.038159 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.042396 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.068727 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") pod \"neutron-f93f-account-create-update-qbxcg\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.074301 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.098783 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.141272 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.160746 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.185849 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.398514 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.405520 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzwcn" event={"ID":"66e52e3f-cffb-44c2-9532-d645fa630d61","Type":"ContainerStarted","Data":"c1ca2e36cdbb37e9d7c021194e66d30657f92800b5c11ae7fe9202fd45a062ad"} Feb 02 14:52:16 crc kubenswrapper[4869]: W0202 14:52:16.422005 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a91413a_aa7c_4564_bf72_53071981cd62.slice/crio-bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77 WatchSource:0}: Error finding container bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77: Status 404 returned error can't find the container with id bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77 Feb 02 14:52:16 crc kubenswrapper[4869]: I0202 14:52:16.581019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.052588 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 14:52:17 crc kubenswrapper[4869]: W0202 14:52:17.053840 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe36a818_4a20_4330_ade7_225a479d7e98.slice/crio-23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a WatchSource:0}: Error finding container 23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a: Status 404 returned error can't find the container with id 23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.150370 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.158858 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.303134 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.454273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kp9g2" event={"ID":"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0","Type":"ContainerStarted","Data":"02f0152486f6d15e27ee638bd4a0ad31fa89aef01cbf65c375e9ea7c3754cb1c"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.456690 4869 generic.go:334] "Generic (PLEG): container finished" podID="8a91413a-aa7c-4564-bf72-53071981cd62" containerID="8ad30a46b6571b102d653acdd91c3117aa9caffad9f46651f8d10f3bce6d1da5" exitCode=0 Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.456765 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9bcf-account-create-update-pprmg" event={"ID":"8a91413a-aa7c-4564-bf72-53071981cd62","Type":"ContainerDied","Data":"8ad30a46b6571b102d653acdd91c3117aa9caffad9f46651f8d10f3bce6d1da5"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.456798 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9bcf-account-create-update-pprmg" 
event={"ID":"8a91413a-aa7c-4564-bf72-53071981cd62","Type":"ContainerStarted","Data":"bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.459426 4869 generic.go:334] "Generic (PLEG): container finished" podID="66e52e3f-cffb-44c2-9532-d645fa630d61" containerID="a67405c792b46e1c7a87b10db412f756b77b32607171121e6cfbf4745d19567f" exitCode=0 Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.459503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzwcn" event={"ID":"66e52e3f-cffb-44c2-9532-d645fa630d61","Type":"ContainerDied","Data":"a67405c792b46e1c7a87b10db412f756b77b32607171121e6cfbf4745d19567f"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.462855 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bznrb" event={"ID":"b5268e6d-82fe-45d8-a243-d37b326346a6","Type":"ContainerStarted","Data":"e0cb2f6956af5d713875e9a9977db1a357539fa9317755fad15a287086493ed9"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.464271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f93f-account-create-update-qbxcg" event={"ID":"6aa7f6b2-de14-408c-8960-662c2ab0e481","Type":"ContainerStarted","Data":"5a3eac8a14a3519fc3baa33a188a36940d29e94a2e52fef88f1631e6608a40a7"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.465479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zf6z" event={"ID":"2b3583d5-e064-4a64-89ba-a97a7fcc993d","Type":"ContainerStarted","Data":"00a4cfc7849d2f9ea55fa2dd3fb70b062afc95bf4b2bcbb1f6797199fd69f8e6"} Feb 02 14:52:17 crc kubenswrapper[4869]: I0202 14:52:17.540139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2561-account-create-update-zwwnx" event={"ID":"be36a818-4a20-4330-ade7-225a479d7e98","Type":"ContainerStarted","Data":"23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.070188 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.188562 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.191349 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-4c4vl" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="dnsmasq-dns" containerID="cri-o://b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950" gracePeriod=10 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.485314 4869 generic.go:334] "Generic (PLEG): container finished" podID="be36a818-4a20-4330-ade7-225a479d7e98" containerID="bc23c4af30b56127451b57906851e79c3c56f83ff81cbe94961025e57448181c" exitCode=0 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.485502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2561-account-create-update-zwwnx" event={"ID":"be36a818-4a20-4330-ade7-225a479d7e98","Type":"ContainerDied","Data":"bc23c4af30b56127451b57906851e79c3c56f83ff81cbe94961025e57448181c"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.491331 4869 generic.go:334] "Generic (PLEG): container finished" podID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" containerID="fd9a1056bb847e46dd277ee512ce8a86dedc30d17b4d1ccaa855457de2552b81" exitCode=0 Feb 02 14:52:18 crc 
kubenswrapper[4869]: I0202 14:52:18.491863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kp9g2" event={"ID":"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0","Type":"ContainerDied","Data":"fd9a1056bb847e46dd277ee512ce8a86dedc30d17b4d1ccaa855457de2552b81"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.499489 4869 generic.go:334] "Generic (PLEG): container finished" podID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerID="b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950" exitCode=0 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.499661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerDied","Data":"b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.503637 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5268e6d-82fe-45d8-a243-d37b326346a6" containerID="213e1848995e356634b595c82a82047cb0a5c02652baad5bea2863f82f47bdbc" exitCode=0 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.503733 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bznrb" event={"ID":"b5268e6d-82fe-45d8-a243-d37b326346a6","Type":"ContainerDied","Data":"213e1848995e356634b595c82a82047cb0a5c02652baad5bea2863f82f47bdbc"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.516393 4869 generic.go:334] "Generic (PLEG): container finished" podID="6aa7f6b2-de14-408c-8960-662c2ab0e481" containerID="59d9f27d8d1ae8627d4c79fa51d4258f445b3484686b6e2d609c49071e26d3ff" exitCode=0 Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.516745 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f93f-account-create-update-qbxcg" event={"ID":"6aa7f6b2-de14-408c-8960-662c2ab0e481","Type":"ContainerDied","Data":"59d9f27d8d1ae8627d4c79fa51d4258f445b3484686b6e2d609c49071e26d3ff"} Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.726825 4869 util.go:48] "No ready sandbox for pod can be found. 
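The "Killing container with a grace period ... gracePeriod=10" record above follows the SyncLoop DELETE for the superseded dnsmasq pod. The 10 s may come from the pod spec's terminationGracePeriodSeconds or from an explicit override on the delete call; the log does not say which. A minimal sketch of the explicit form (kubeconfig path is a placeholder; namespace and pod name are from the log):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Delete with an explicit 10-second grace period, matching the
	// gracePeriod=10 the kubelet logged while killing the old container.
	gp := int64(10)
	if err := cs.CoreV1().Pods("openstack").Delete(context.TODO(),
		"dnsmasq-dns-8554648995-4c4vl",
		metav1.DeleteOptions{GracePeriodSeconds: &gp}); err != nil {
		panic(err)
	}
}
```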
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837503 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.837670 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") pod \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\" (UID: \"54b21918-ca4b-429c-8a6e-dd4bb0240efd\") " Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.848667 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt" (OuterVolumeSpecName: "kube-api-access-2s9zt") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "kube-api-access-2s9zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.903933 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.925160 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.939961 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.940007 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s9zt\" (UniqueName: \"kubernetes.io/projected/54b21918-ca4b-429c-8a6e-dd4bb0240efd-kube-api-access-2s9zt\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.940020 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:18 crc kubenswrapper[4869]: I0202 14:52:18.951121 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.003068 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.019544 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config" (OuterVolumeSpecName: "config") pod "54b21918-ca4b-429c-8a6e-dd4bb0240efd" (UID: "54b21918-ca4b-429c-8a6e-dd4bb0240efd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.043402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") pod \"66e52e3f-cffb-44c2-9532-d645fa630d61\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.043522 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") pod \"66e52e3f-cffb-44c2-9532-d645fa630d61\" (UID: \"66e52e3f-cffb-44c2-9532-d645fa630d61\") " Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.044263 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.044288 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b21918-ca4b-429c-8a6e-dd4bb0240efd-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.051408 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66e52e3f-cffb-44c2-9532-d645fa630d61" (UID: "66e52e3f-cffb-44c2-9532-d645fa630d61"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.051537 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl" (OuterVolumeSpecName: "kube-api-access-qfwnl") pod "66e52e3f-cffb-44c2-9532-d645fa630d61" (UID: "66e52e3f-cffb-44c2-9532-d645fa630d61"). InnerVolumeSpecName "kube-api-access-qfwnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.136007 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.146074 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66e52e3f-cffb-44c2-9532-d645fa630d61-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.146114 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfwnl\" (UniqueName: \"kubernetes.io/projected/66e52e3f-cffb-44c2-9532-d645fa630d61-kube-api-access-qfwnl\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.247121 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") pod \"8a91413a-aa7c-4564-bf72-53071981cd62\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.247185 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v9wc\" (UniqueName: \"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") pod \"8a91413a-aa7c-4564-bf72-53071981cd62\" (UID: \"8a91413a-aa7c-4564-bf72-53071981cd62\") " Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.249583 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a91413a-aa7c-4564-bf72-53071981cd62" (UID: "8a91413a-aa7c-4564-bf72-53071981cd62"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.255774 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc" (OuterVolumeSpecName: "kube-api-access-9v9wc") pod "8a91413a-aa7c-4564-bf72-53071981cd62" (UID: "8a91413a-aa7c-4564-bf72-53071981cd62"). InnerVolumeSpecName "kube-api-access-9v9wc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.349193 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a91413a-aa7c-4564-bf72-53071981cd62-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.349245 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v9wc\" (UniqueName: \"kubernetes.io/projected/8a91413a-aa7c-4564-bf72-53071981cd62-kube-api-access-9v9wc\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.530221 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-4c4vl" event={"ID":"54b21918-ca4b-429c-8a6e-dd4bb0240efd","Type":"ContainerDied","Data":"ee3bdcdcebe4cf916bdc1a9e9914fdc757fcd93e8090271d1331cae80e239cc8"} Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.530275 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-4c4vl" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.530283 4869 scope.go:117] "RemoveContainer" containerID="b3ead3c7387dc43b885947ba69cc1b8368881b48f975e77ebf577ea458662950" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.536300 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wzwcn" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.536315 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wzwcn" event={"ID":"66e52e3f-cffb-44c2-9532-d645fa630d61","Type":"ContainerDied","Data":"c1ca2e36cdbb37e9d7c021194e66d30657f92800b5c11ae7fe9202fd45a062ad"} Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.536503 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ca2e36cdbb37e9d7c021194e66d30657f92800b5c11ae7fe9202fd45a062ad" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.538265 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9bcf-account-create-update-pprmg" event={"ID":"8a91413a-aa7c-4564-bf72-53071981cd62","Type":"ContainerDied","Data":"bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77"} Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.538312 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6f5ffd5929eae334eb780d777783d98ec24f71372fb133e4dda6530c497a77" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.538328 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9bcf-account-create-update-pprmg" Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.569043 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:52:19 crc kubenswrapper[4869]: I0202 14:52:19.579786 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-4c4vl"] Feb 02 14:52:21 crc kubenswrapper[4869]: I0202 14:52:21.477520 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" path="/var/lib/kubelet/pods/54b21918-ca4b-429c-8a6e-dd4bb0240efd/volumes" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.114777 4869 scope.go:117] "RemoveContainer" containerID="d4bc95d2879e70b645a2e7e235f1fbdcdf5fe19a1ef7176a88d572c086b1c57b" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.329771 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.371278 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.383794 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.408813 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470026 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") pod \"be36a818-4a20-4330-ade7-225a479d7e98\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470154 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") pod \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470189 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") pod \"b5268e6d-82fe-45d8-a243-d37b326346a6\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") pod \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\" (UID: \"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") pod \"be36a818-4a20-4330-ade7-225a479d7e98\" (UID: \"be36a818-4a20-4330-ade7-225a479d7e98\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.470473 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") pod \"b5268e6d-82fe-45d8-a243-d37b326346a6\" (UID: \"b5268e6d-82fe-45d8-a243-d37b326346a6\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.471269 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" (UID: "dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.471353 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5268e6d-82fe-45d8-a243-d37b326346a6" (UID: "b5268e6d-82fe-45d8-a243-d37b326346a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.473620 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be36a818-4a20-4330-ade7-225a479d7e98" (UID: "be36a818-4a20-4330-ade7-225a479d7e98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.478048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z" (OuterVolumeSpecName: "kube-api-access-nj96z") pod "be36a818-4a20-4330-ade7-225a479d7e98" (UID: "be36a818-4a20-4330-ade7-225a479d7e98"). InnerVolumeSpecName "kube-api-access-nj96z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.480333 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl" (OuterVolumeSpecName: "kube-api-access-vbxxl") pod "b5268e6d-82fe-45d8-a243-d37b326346a6" (UID: "b5268e6d-82fe-45d8-a243-d37b326346a6"). InnerVolumeSpecName "kube-api-access-vbxxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.488374 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms" (OuterVolumeSpecName: "kube-api-access-dgfms") pod "dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" (UID: "dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0"). InnerVolumeSpecName "kube-api-access-dgfms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") pod \"6aa7f6b2-de14-408c-8960-662c2ab0e481\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577354 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") pod \"6aa7f6b2-de14-408c-8960-662c2ab0e481\" (UID: \"6aa7f6b2-de14-408c-8960-662c2ab0e481\") " Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577822 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6aa7f6b2-de14-408c-8960-662c2ab0e481" (UID: "6aa7f6b2-de14-408c-8960-662c2ab0e481"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577967 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be36a818-4a20-4330-ade7-225a479d7e98-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.577998 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxxl\" (UniqueName: \"kubernetes.io/projected/b5268e6d-82fe-45d8-a243-d37b326346a6-kube-api-access-vbxxl\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578016 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj96z\" (UniqueName: \"kubernetes.io/projected/be36a818-4a20-4330-ade7-225a479d7e98-kube-api-access-nj96z\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578028 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgfms\" (UniqueName: \"kubernetes.io/projected/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-kube-api-access-dgfms\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578039 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5268e6d-82fe-45d8-a243-d37b326346a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578050 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.578062 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa7f6b2-de14-408c-8960-662c2ab0e481-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.580957 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4" (OuterVolumeSpecName: "kube-api-access-zm6b4") pod "6aa7f6b2-de14-408c-8960-662c2ab0e481" (UID: "6aa7f6b2-de14-408c-8960-662c2ab0e481"). 
InnerVolumeSpecName "kube-api-access-zm6b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.591060 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kp9g2" event={"ID":"dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0","Type":"ContainerDied","Data":"02f0152486f6d15e27ee638bd4a0ad31fa89aef01cbf65c375e9ea7c3754cb1c"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.591099 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kp9g2" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.591111 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f0152486f6d15e27ee638bd4a0ad31fa89aef01cbf65c375e9ea7c3754cb1c" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.598552 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bznrb" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.598835 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bznrb" event={"ID":"b5268e6d-82fe-45d8-a243-d37b326346a6","Type":"ContainerDied","Data":"e0cb2f6956af5d713875e9a9977db1a357539fa9317755fad15a287086493ed9"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.598882 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0cb2f6956af5d713875e9a9977db1a357539fa9317755fad15a287086493ed9" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.602442 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f93f-account-create-update-qbxcg" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.602442 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f93f-account-create-update-qbxcg" event={"ID":"6aa7f6b2-de14-408c-8960-662c2ab0e481","Type":"ContainerDied","Data":"5a3eac8a14a3519fc3baa33a188a36940d29e94a2e52fef88f1631e6608a40a7"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.602575 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3eac8a14a3519fc3baa33a188a36940d29e94a2e52fef88f1631e6608a40a7" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.606136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zf6z" event={"ID":"2b3583d5-e064-4a64-89ba-a97a7fcc993d","Type":"ContainerStarted","Data":"cecab4e9b99e25e3a70710711bfe9446ff16abe3509be2bbfedce73c81eaeb89"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.610242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2561-account-create-update-zwwnx" event={"ID":"be36a818-4a20-4330-ade7-225a479d7e98","Type":"ContainerDied","Data":"23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a"} Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.610311 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23af9abd6b70f5b98eaa18710f2a68df5b51b8f09b91ff369dd982c80b43330a" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.610393 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2561-account-create-update-zwwnx" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.628662 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6zf6z" podStartSLOduration=2.077299385 podStartE2EDuration="8.628612776s" podCreationTimestamp="2026-02-02 14:52:15 +0000 UTC" firstStartedPulling="2026-02-02 14:52:16.641895427 +0000 UTC m=+1138.286532197" lastFinishedPulling="2026-02-02 14:52:23.193208818 +0000 UTC m=+1144.837845588" observedRunningTime="2026-02-02 14:52:23.627537699 +0000 UTC m=+1145.272174479" watchObservedRunningTime="2026-02-02 14:52:23.628612776 +0000 UTC m=+1145.273249556" Feb 02 14:52:23 crc kubenswrapper[4869]: I0202 14:52:23.680586 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm6b4\" (UniqueName: \"kubernetes.io/projected/6aa7f6b2-de14-408c-8960-662c2ab0e481-kube-api-access-zm6b4\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:27 crc kubenswrapper[4869]: I0202 14:52:27.652691 4869 generic.go:334] "Generic (PLEG): container finished" podID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" containerID="cecab4e9b99e25e3a70710711bfe9446ff16abe3509be2bbfedce73c81eaeb89" exitCode=0 Feb 02 14:52:27 crc kubenswrapper[4869]: I0202 14:52:27.652769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zf6z" event={"ID":"2b3583d5-e064-4a64-89ba-a97a7fcc993d","Type":"ContainerDied","Data":"cecab4e9b99e25e3a70710711bfe9446ff16abe3509be2bbfedce73c81eaeb89"} Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.052024 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.197530 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") pod \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.197674 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") pod \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.197710 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") pod \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\" (UID: \"2b3583d5-e064-4a64-89ba-a97a7fcc993d\") " Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.205489 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v" (OuterVolumeSpecName: "kube-api-access-df86v") pod "2b3583d5-e064-4a64-89ba-a97a7fcc993d" (UID: "2b3583d5-e064-4a64-89ba-a97a7fcc993d"). InnerVolumeSpecName "kube-api-access-df86v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.228455 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b3583d5-e064-4a64-89ba-a97a7fcc993d" (UID: "2b3583d5-e064-4a64-89ba-a97a7fcc993d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.267207 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data" (OuterVolumeSpecName: "config-data") pod "2b3583d5-e064-4a64-89ba-a97a7fcc993d" (UID: "2b3583d5-e064-4a64-89ba-a97a7fcc993d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.300504 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.300616 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3583d5-e064-4a64-89ba-a97a7fcc993d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.300633 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df86v\" (UniqueName: \"kubernetes.io/projected/2b3583d5-e064-4a64-89ba-a97a7fcc993d-kube-api-access-df86v\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.677460 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6zf6z" event={"ID":"2b3583d5-e064-4a64-89ba-a97a7fcc993d","Type":"ContainerDied","Data":"00a4cfc7849d2f9ea55fa2dd3fb70b062afc95bf4b2bcbb1f6797199fd69f8e6"} Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.678014 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a4cfc7849d2f9ea55fa2dd3fb70b062afc95bf4b2bcbb1f6797199fd69f8e6" Feb 02 14:52:29 crc kubenswrapper[4869]: I0202 14:52:29.677536 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6zf6z" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.136489 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137020 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be36a818-4a20-4330-ade7-225a479d7e98" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137040 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="be36a818-4a20-4330-ade7-225a479d7e98" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137048 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137055 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137072 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a91413a-aa7c-4564-bf72-53071981cd62" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137086 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a91413a-aa7c-4564-bf72-53071981cd62" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137107 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e52e3f-cffb-44c2-9532-d645fa630d61" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137117 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e52e3f-cffb-44c2-9532-d645fa630d61" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137140 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="init" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137148 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="init" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137163 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="dnsmasq-dns" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137172 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="dnsmasq-dns" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137190 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa7f6b2-de14-408c-8960-662c2ab0e481" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137199 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa7f6b2-de14-408c-8960-662c2ab0e481" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 14:52:30.137214 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5268e6d-82fe-45d8-a243-d37b326346a6" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137223 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5268e6d-82fe-45d8-a243-d37b326346a6" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: E0202 
14:52:30.137239 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" containerName="keystone-db-sync" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137249 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" containerName="keystone-db-sync" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137414 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b21918-ca4b-429c-8a6e-dd4bb0240efd" containerName="dnsmasq-dns" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137427 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137440 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a91413a-aa7c-4564-bf72-53071981cd62" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137448 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e52e3f-cffb-44c2-9532-d645fa630d61" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137455 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5268e6d-82fe-45d8-a243-d37b326346a6" containerName="mariadb-database-create" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137464 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="be36a818-4a20-4330-ade7-225a479d7e98" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137474 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" containerName="keystone-db-sync" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.137480 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa7f6b2-de14-408c-8960-662c2ab0e481" containerName="mariadb-account-create-update" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.138220 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.153933 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.154933 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-72872" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.155217 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.155273 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.155376 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.156894 4869 util.go:30] "No sandbox for pod can be found. 
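The burst of "RemoveStaleState: removing container" / "Deleted CPUSet assignment" lines above fires while admitting keystone-bootstrap-f4vkc: the CPU and memory managers purge per-container resource state left behind by the db-create, account-create, dnsmasq, and keystone-db-sync pods that just finished. A simplified sketch of that pruning pattern with invented types and placeholder assignment values (the real code lives in kubelet's cpu_manager and memory_manager state packages):

```go
package main

import "fmt"

// key mirrors how stale entries are logged: pod UID plus container name.
type key struct{ podUID, container string }

// pruneStale drops assignments whose pod is no longer active, the moral
// equivalent of RemoveStaleState in the log above.
func pruneStale(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("removing container %q of pod %s\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	// UIDs copied from the log; the CPU-set strings are invented.
	assignments := map[key]string{
		{"2b3583d5-e064-4a64-89ba-a97a7fcc993d", "keystone-db-sync"}:        "cpus 0-1",
		{"02317eeb-3381-4883-b345-2ec84b402aae", "keystone-bootstrap"}:      "cpus 2-3",
		{"54b21918-ca4b-429c-8a6e-dd4bb0240efd", "dnsmasq-dns"}:             "cpus 0-3",
	}
	active := map[string]bool{"02317eeb-3381-4883-b345-2ec84b402aae": true}
	pruneStale(assignments, active)
	fmt.Println("remaining assignments:", len(assignments))
}
```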
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.159412 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.188322 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.198342 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.219844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.219922 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.220172 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.220313 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.220467 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.220546 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322443 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322492 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322527 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322568 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322620 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322703 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322732 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.322831 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.340305 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.356861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.357409 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.362356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.370809 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.388677 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") pod \"keystone-bootstrap-f4vkc\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.425785 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.425890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.425938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " 
pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.426032 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.426075 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.426974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.431189 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.437894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.443993 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.473901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") pod \"dnsmasq-dns-67795cd9-j8z7x\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.482408 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.490508 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.574322 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.576232 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.576323 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.598864 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.599360 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.599738 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-92dp9" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.637607 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.638893 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.680847 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.681213 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.681372 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9hgj2" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.720949 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738495 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738525 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc 
kubenswrapper[4869]: I0202 14:52:30.738617 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.738755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.793997 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.842880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.842962 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843107 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843160 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc 
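The progression running through these entries, "operationExecutor.VerifyControllerAttachedVolume started" → "operationExecutor.MountVolume started" → "MountVolume.SetUp succeeded", is kubelet's volume manager reconciling desired state (the volumes the new db-sync pods declare) against actual state (what is currently mounted). A toy reconciler showing just that shape; the types are invented for illustration and the real logic lives in kubelet's pkg/kubelet/volumemanager reconciler:

```go
package main

import "fmt"

// volume identifies a mount a pod spec asks for; the fmt calls stand in
// for the per-plugin SetUp machinery.
type volume struct{ pod, name string }

func reconcile(desired []volume, mounted map[volume]bool) {
	for _, v := range desired {
		if mounted[v] {
			continue // already in the actual state of the world
		}
		fmt.Printf("MountVolume started for %q pod %q\n", v.name, v.pod)
		// ... plugin SetUp would run here ...
		mounted[v] = true
		fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
	}
}

func main() {
	desired := []volume{
		{"openstack/cinder-db-sync-s2dwg", "scripts"},
		{"openstack/cinder-db-sync-s2dwg", "config-data"},
		{"openstack/neutron-db-sync-hz9pj", "config"},
	}
	reconcile(desired, map[volume]bool{})
}
```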
kubenswrapper[4869]: I0202 14:52:30.843195 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843223 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.843284 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.857039 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.870812 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.874442 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.874608 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.875731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.876369 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.881537 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.881679 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.896671 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.905941 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.910864 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.913435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.919992 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") pod \"cinder-db-sync-s2dwg\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.927608 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q447q" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.930854 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.939092 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") pod \"neutron-db-sync-hz9pj\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") " pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.942024 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pg4t9" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.942263 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 14:52:30 crc kubenswrapper[4869]: I0202 14:52:30.942324 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.025773 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031657 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031747 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031836 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031850 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.031867 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.069023 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.095594 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.107092 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.112702 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.113395 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.115512 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.120121 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2d6ss" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.122541 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.127146 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.134890 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139168 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139324 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139355 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139450 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139514 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139552 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139731 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139854 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") pod 
\"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139887 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.139946 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.140027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.140061 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.140186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.140278 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.148161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.148412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.161051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.166291 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.167965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.183957 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.196826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.198950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") pod \"ceilometer-0\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242633 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242736 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242846 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242866 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242920 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242959 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.242983 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.244902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.245510 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.245788 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.247474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.248362 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.252766 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.255456 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.266887 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.268042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.269161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.274146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") pod \"dnsmasq-dns-5b6dbdb6f5-bzm58\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.274736 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") pod \"placement-db-sync-q447q\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.285641 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") pod \"barbican-db-sync-4fqzr\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.367513 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.378044 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q447q" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.504475 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.517450 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:31 crc kubenswrapper[4869]: W0202 14:52:31.533314 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8205ba1c_9c1b_4d76_83f5_2f30dba11533.slice/crio-786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce WatchSource:0}: Error finding container 786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce: Status 404 returned error can't find the container with id 786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.571412 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.597795 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.777716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" event={"ID":"8205ba1c-9c1b-4d76-83f5-2f30dba11533","Type":"ContainerStarted","Data":"786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce"} Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.782215 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f4vkc" event={"ID":"02317eeb-3381-4883-b345-2ec84b402aae","Type":"ContainerStarted","Data":"180b224a231cda3b4ae69afc28110045d922067babdece8f42149ecb73f011f0"} Feb 02 14:52:31 crc kubenswrapper[4869]: I0202 14:52:31.915671 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.051628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.214410 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.237101 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.415445 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.426803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.797206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q447q" event={"ID":"2a5f9f47-1ba0-4d37-8597-874a62d9045e","Type":"ContainerStarted","Data":"91133cd950cbaf0a2fd654c7a3e7af936c27a7b6526630fb20d70ac6c178f469"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.824159 4869 generic.go:334] "Generic (PLEG): container finished" podID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" containerID="165e6d41cdbda9554672f48bfbf6dae797c409b00fe7e4b925b58548cd537f9b" exitCode=0 Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.824240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" event={"ID":"8205ba1c-9c1b-4d76-83f5-2f30dba11533","Type":"ContainerDied","Data":"165e6d41cdbda9554672f48bfbf6dae797c409b00fe7e4b925b58548cd537f9b"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.873542 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hz9pj" event={"ID":"367199b6-3340-454e-acc5-478f9b35b2df","Type":"ContainerStarted","Data":"8bb80d715d8f5ab6d26df204394e8bf93606b57fc5408d917fc1dee2b0e16af2"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.873620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hz9pj" event={"ID":"367199b6-3340-454e-acc5-478f9b35b2df","Type":"ContainerStarted","Data":"7dd80f0858d5d331b1948ea1170d5424dc4e4ccf69aa8a84169b4800d0e4fc13"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.913787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f4vkc" 
event={"ID":"02317eeb-3381-4883-b345-2ec84b402aae","Type":"ContainerStarted","Data":"078449dfe9468d87dcfb0be258a6b0c80818d1519435a1c1a98664100d03e287"} Feb 02 14:52:32 crc kubenswrapper[4869]: I0202 14:52:32.968107 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerStarted","Data":"9d20104835b08533de4169d71a96c0b24b6f27636df1686a4f2724353347f5f4"} Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.039544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s2dwg" event={"ID":"f0e63b99-6d06-44ea-a061-b9f79551126a","Type":"ContainerStarted","Data":"86f6ff04cbc086ccbfd2e84539b1d96a49f77aa4c0aa0c0898599df70d3ebe0a"} Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.055389 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hz9pj" podStartSLOduration=3.055346829 podStartE2EDuration="3.055346829s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:32.913468876 +0000 UTC m=+1154.558105646" watchObservedRunningTime="2026-02-02 14:52:33.055346829 +0000 UTC m=+1154.699983599" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.104123 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f4vkc" podStartSLOduration=3.104095006 podStartE2EDuration="3.104095006s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:33.030811431 +0000 UTC m=+1154.675448221" watchObservedRunningTime="2026-02-02 14:52:33.104095006 +0000 UTC m=+1154.748731776" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.104881 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4fqzr" event={"ID":"818ee387-cf73-45bc-8925-c234d5fd8ee3","Type":"ContainerStarted","Data":"ee7fd35cc885ef9baea8bed6be792f654b41db4b87960643e8aaaa20fc9891a4"} Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.122841 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"9a54c86921d5b0ef544bfd0a64a504e7bbbc4ab3d0006b551a598232317f2a2b"} Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.131809 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.600662 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723712 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.723877 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") pod \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\" (UID: \"8205ba1c-9c1b-4d76-83f5-2f30dba11533\") " Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.736257 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r" (OuterVolumeSpecName: "kube-api-access-b5v5r") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "kube-api-access-b5v5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.771517 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config" (OuterVolumeSpecName: "config") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.772346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.785031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.804827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8205ba1c-9c1b-4d76-83f5-2f30dba11533" (UID: "8205ba1c-9c1b-4d76-83f5-2f30dba11533"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828221 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828260 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5v5r\" (UniqueName: \"kubernetes.io/projected/8205ba1c-9c1b-4d76-83f5-2f30dba11533-kube-api-access-b5v5r\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828275 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828284 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:33 crc kubenswrapper[4869]: I0202 14:52:33.828296 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8205ba1c-9c1b-4d76-83f5-2f30dba11533-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.150899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" event={"ID":"8205ba1c-9c1b-4d76-83f5-2f30dba11533","Type":"ContainerDied","Data":"786069f37ed99238fa7dc1ce5b4dad818711ea263837067545c0291419cb79ce"} Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.151404 4869 scope.go:117] "RemoveContainer" containerID="165e6d41cdbda9554672f48bfbf6dae797c409b00fe7e4b925b58548cd537f9b" Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.151257 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-j8z7x" Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.161413 4869 generic.go:334] "Generic (PLEG): container finished" podID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerID="5b057f5c2556a8f58e337485429c58bd6088b4c173270d5455938195918cef0b" exitCode=0 Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.163532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerDied","Data":"5b057f5c2556a8f58e337485429c58bd6088b4c173270d5455938195918cef0b"} Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.284259 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:34 crc kubenswrapper[4869]: I0202 14:52:34.298967 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-j8z7x"] Feb 02 14:52:35 crc kubenswrapper[4869]: I0202 14:52:35.190053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerStarted","Data":"a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974"} Feb 02 14:52:35 crc kubenswrapper[4869]: I0202 14:52:35.190509 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:35 crc kubenswrapper[4869]: I0202 14:52:35.474753 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" path="/var/lib/kubelet/pods/8205ba1c-9c1b-4d76-83f5-2f30dba11533/volumes" Feb 02 14:52:38 crc kubenswrapper[4869]: I0202 14:52:38.229302 4869 generic.go:334] "Generic (PLEG): container finished" podID="02317eeb-3381-4883-b345-2ec84b402aae" containerID="078449dfe9468d87dcfb0be258a6b0c80818d1519435a1c1a98664100d03e287" exitCode=0 Feb 02 14:52:38 crc kubenswrapper[4869]: I0202 14:52:38.229384 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f4vkc" event={"ID":"02317eeb-3381-4883-b345-2ec84b402aae","Type":"ContainerDied","Data":"078449dfe9468d87dcfb0be258a6b0c80818d1519435a1c1a98664100d03e287"} Feb 02 14:52:38 crc kubenswrapper[4869]: I0202 14:52:38.258297 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" podStartSLOduration=8.258265819 podStartE2EDuration="8.258265819s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:35.215278278 +0000 UTC m=+1156.859915078" watchObservedRunningTime="2026-02-02 14:52:38.258265819 +0000 UTC m=+1159.902902589" Feb 02 14:52:41 crc kubenswrapper[4869]: I0202 14:52:41.519684 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:52:41 crc kubenswrapper[4869]: I0202 14:52:41.627062 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"] Feb 02 14:52:41 crc kubenswrapper[4869]: I0202 14:52:41.627417 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" containerID="cri-o://21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266" gracePeriod=10 Feb 02 
14:52:42 crc kubenswrapper[4869]: I0202 14:52:42.283718 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerID="21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266" exitCode=0 Feb 02 14:52:42 crc kubenswrapper[4869]: I0202 14:52:42.283784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerDied","Data":"21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266"} Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.065577 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.526820 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.711791 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.711881 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.712191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.712249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.712383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.712454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") pod \"02317eeb-3381-4883-b345-2ec84b402aae\" (UID: \"02317eeb-3381-4883-b345-2ec84b402aae\") " Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.725657 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts" (OuterVolumeSpecName: "scripts") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.734888 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5" (OuterVolumeSpecName: "kube-api-access-txtq5") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "kube-api-access-txtq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.750242 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.766184 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.808347 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data" (OuterVolumeSpecName: "config-data") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814135 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02317eeb-3381-4883-b345-2ec84b402aae" (UID: "02317eeb-3381-4883-b345-2ec84b402aae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814699 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814755 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814767 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814778 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txtq5\" (UniqueName: \"kubernetes.io/projected/02317eeb-3381-4883-b345-2ec84b402aae-kube-api-access-txtq5\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814790 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:43 crc kubenswrapper[4869]: I0202 14:52:43.814799 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/02317eeb-3381-4883-b345-2ec84b402aae-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.306187 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f4vkc" event={"ID":"02317eeb-3381-4883-b345-2ec84b402aae","Type":"ContainerDied","Data":"180b224a231cda3b4ae69afc28110045d922067babdece8f42149ecb73f011f0"} Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.306646 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="180b224a231cda3b4ae69afc28110045d922067babdece8f42149ecb73f011f0" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.306434 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f4vkc" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.744700 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.762807 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f4vkc"] Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844117 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 14:52:44 crc kubenswrapper[4869]: E0202 14:52:44.844536 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02317eeb-3381-4883-b345-2ec84b402aae" containerName="keystone-bootstrap" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844558 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02317eeb-3381-4883-b345-2ec84b402aae" containerName="keystone-bootstrap" Feb 02 14:52:44 crc kubenswrapper[4869]: E0202 14:52:44.844599 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" containerName="init" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844607 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" containerName="init" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844785 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="02317eeb-3381-4883-b345-2ec84b402aae" containerName="keystone-bootstrap" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.844817 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8205ba1c-9c1b-4d76-83f5-2f30dba11533" containerName="init" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.845457 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.848600 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.849057 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.849121 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.849071 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.849520 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-72872" Feb 02 14:52:44 crc kubenswrapper[4869]: I0202 14:52:44.863401 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.041881 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042684 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.042779 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.144969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145241 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145266 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.145316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.151102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.152001 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.155283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.168570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: 
\"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl"
Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.170124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl"
Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.170439 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") pod \"keystone-bootstrap-zxtsl\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " pod="openstack/keystone-bootstrap-zxtsl"
Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.220699 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxtsl"
Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.304240 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.304308 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:52:45 crc kubenswrapper[4869]: I0202 14:52:45.476781 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02317eeb-3381-4883-b345-2ec84b402aae" path="/var/lib/kubelet/pods/02317eeb-3381-4883-b345-2ec84b402aae/volumes"
Feb 02 14:52:48 crc kubenswrapper[4869]: I0202 14:52:48.064965 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused"
Feb 02 14:52:53 crc kubenswrapper[4869]: I0202 14:52:53.065397 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused"
Feb 02 14:52:53 crc kubenswrapper[4869]: I0202 14:52:53.066289 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k"
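The "Probe failed" entries above record kubelet's HTTP liveness check (GET http://127.0.0.1:8798/health for machine-config-daemon) and TCP readiness check (10.217.0.123:5353 for dnsmasq-dns) both failing with connection refused. The following is a minimal, illustrative Go sketch of the kind of check kubelet's prober performs for an HTTP probe; it is not kubelet's actual prober code, the URL is taken from the log line above, and the 1-second timeout is assumed to match kubelet's default probe timeout.

```go
// Illustrative re-creation of an HTTP liveness check; not kubelet's prober.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second} // assumed: kubelet's default probe timeout
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// A refused connection surfaces exactly like the log output above:
		// Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused
		fmt.Printf("probeResult=failure output=%q\n", err.Error())
		return
	}
	defer resp.Body.Close()
	// Kubelet treats any 2xx/3xx status as probe success.
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probeResult=success")
	} else {
		fmt.Printf("probeResult=failure status=%d\n", resp.StatusCode)
	}
}
```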
Feb 02 14:52:56 crc kubenswrapper[4869]: E0202 14:52:56.722495 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Feb 02 14:52:56 crc kubenswrapper[4869]: E0202 14:52:56.723394 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9jzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-s2dwg_openstack(f0e63b99-6d06-44ea-a061-b9f79551126a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 14:52:56 crc kubenswrapper[4869]: E0202 14:52:56.724994 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-s2dwg" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a"
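These three entries trace one failure end to end: the CRI image pull is aborted (gRPC code Canceled), the kuberuntime manager records the container start failure with the full Container spec, and the pod worker marks the sync as ErrImagePull; a later entry (14:52:57.455850) shows the same pod flipping to ImagePullBackOff. A minimal client-go sketch for surfacing that waiting reason from outside the node is below; the namespace and pod name come from the log, while the kubeconfig path is a placeholder assumption.

```go
// Sketch: read the waiting reason (ErrImagePull / ImagePullBackOff) for a
// stuck container via client-go. Not part of the log's tooling.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(), "cinder-db-sync-s2dwg", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if w := st.State.Waiting; w != nil {
			// Expect the reason to move ErrImagePull -> ImagePullBackOff, as in the log.
			fmt.Printf("%s: %s (%s)\n", st.Name, w.Reason, w.Message)
		}
	}
}
```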
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.040245 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k"
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.122467 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") "
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.125502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") "
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.125732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") "
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.125795 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") "
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.125849 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") pod \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\" (UID: \"cc6051dd-8fa8-4c0b-bd98-9d180754d64a\") "
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.136519 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw" (OuterVolumeSpecName: "kube-api-access-zw9pw") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "kube-api-access-zw9pw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.202120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config" (OuterVolumeSpecName: "config") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.228604 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.228954 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw9pw\" (UniqueName: \"kubernetes.io/projected/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-kube-api-access-zw9pw\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.228988 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.229001 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.233976 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.250022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.276961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc6051dd-8fa8-4c0b-bd98-9d180754d64a" (UID: "cc6051dd-8fa8-4c0b-bd98-9d180754d64a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.330732 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.330774 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc6051dd-8fa8-4c0b-bd98-9d180754d64a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.428242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxtsl" event={"ID":"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b","Type":"ContainerStarted","Data":"7ec50d3c95d3d2c9d96e976502e27bc356d7e820fe0c2796a704965f259c6dc6"} Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.432125 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4fqzr" event={"ID":"818ee387-cf73-45bc-8925-c234d5fd8ee3","Type":"ContainerStarted","Data":"8962be87127b6e0d3f3ece55fe53f40715482971642999f7d7b74c30b09eeea6"} Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.435528 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49"} Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.438979 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k" event={"ID":"cc6051dd-8fa8-4c0b-bd98-9d180754d64a","Type":"ContainerDied","Data":"a735d4f93e2231ae2a788ee232093dfbb8748b09065788ca6cc6337170b33936"} Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.439011 4869 scope.go:117] "RemoveContainer" containerID="21d38bf794f66e2ad9e787fa612464d3a84fc2645f8605570d7efe766c774266" Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.439146 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-wgl4k"
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.449583 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4fqzr" podStartSLOduration=3.206805727 podStartE2EDuration="27.449555836s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="2026-02-02 14:52:32.455526349 +0000 UTC m=+1154.100163119" lastFinishedPulling="2026-02-02 14:52:56.698276458 +0000 UTC m=+1178.342913228" observedRunningTime="2026-02-02 14:52:57.447232219 +0000 UTC m=+1179.091868989" watchObservedRunningTime="2026-02-02 14:52:57.449555836 +0000 UTC m=+1179.094192606"
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.454517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q447q" event={"ID":"2a5f9f47-1ba0-4d37-8597-874a62d9045e","Type":"ContainerStarted","Data":"da76a4a0a2fd91d41e48fb82a3fd0ddaf3e6b22ad0d146b95f9759bc6eb3ab36"}
Feb 02 14:52:57 crc kubenswrapper[4869]: E0202 14:52:57.455850 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-s2dwg" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a"
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.508681 4869 scope.go:117] "RemoveContainer" containerID="bc9dde5f802202af7a85f0bef2eac6285904a7c6caf12c1643635106506e9002"
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.524481 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-q447q" podStartSLOduration=3.121045594 podStartE2EDuration="27.52444974s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="2026-02-02 14:52:32.269130095 +0000 UTC m=+1153.913766865" lastFinishedPulling="2026-02-02 14:52:56.672534241 +0000 UTC m=+1178.317171011" observedRunningTime="2026-02-02 14:52:57.495261858 +0000 UTC m=+1179.139898618" watchObservedRunningTime="2026-02-02 14:52:57.52444974 +0000 UTC m=+1179.169086510"
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.584574 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"]
Feb 02 14:52:57 crc kubenswrapper[4869]: I0202 14:52:57.596863 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-wgl4k"]
Feb 02 14:52:58 crc kubenswrapper[4869]: I0202 14:52:58.479468 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxtsl" event={"ID":"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b","Type":"ContainerStarted","Data":"f5f3adb22514a5728bdaa407debd5241eb6b5669db2e00b862292c4751c58656"}
Feb 02 14:52:58 crc kubenswrapper[4869]: I0202 14:52:58.503168 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zxtsl" podStartSLOduration=14.503145098 podStartE2EDuration="14.503145098s" podCreationTimestamp="2026-02-02 14:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:52:58.498160325 +0000 UTC m=+1180.142797095" watchObservedRunningTime="2026-02-02 14:52:58.503145098 +0000 UTC m=+1180.147781858"
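The startup-latency fields in these entries are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. For barbican-db-sync-4fqzr above: 27.449555836s − (14:52:56.698276458 − 14:52:32.455526349 = 24.242750109s) = 3.206805727s, exactly the logged podStartSLOduration. A short Go check of that arithmetic, using the timestamps from the entry (with the " m=+..." monotonic-clock suffixes dropped):

```go
// Sanity-check of the barbican-db-sync-4fqzr startup-duration entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's time.Time default String format
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-02 14:52:30 +0000 UTC")
	firstPull := parse("2026-02-02 14:52:32.455526349 +0000 UTC")
	lastPull := parse("2026-02-02 14:52:56.698276458 +0000 UTC")
	watched := parse("2026-02-02 14:52:57.449555836 +0000 UTC")

	e2e := watched.Sub(created)          // 27.449555836s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // pull window excluded
	fmt.Println(e2e, slo)                // prints: 27.449555836s 3.206805727s
}
```

The keystone-bootstrap-zxtsl entry shows the degenerate case: with no pull (both pull timestamps at the zero time 0001-01-01), SLO and E2E durations are equal.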
Feb 02 14:52:59 crc kubenswrapper[4869]: I0202 14:52:59.475956 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" path="/var/lib/kubelet/pods/cc6051dd-8fa8-4c0b-bd98-9d180754d64a/volumes"
Feb 02 14:52:59 crc kubenswrapper[4869]: I0202 14:52:59.498742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121"}
Feb 02 14:52:59 crc kubenswrapper[4869]: I0202 14:52:59.500645 4869 generic.go:334] "Generic (PLEG): container finished" podID="367199b6-3340-454e-acc5-478f9b35b2df" containerID="8bb80d715d8f5ab6d26df204394e8bf93606b57fc5408d917fc1dee2b0e16af2" exitCode=0
Feb 02 14:52:59 crc kubenswrapper[4869]: I0202 14:52:59.502212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hz9pj" event={"ID":"367199b6-3340-454e-acc5-478f9b35b2df","Type":"ContainerDied","Data":"8bb80d715d8f5ab6d26df204394e8bf93606b57fc5408d917fc1dee2b0e16af2"}
Feb 02 14:53:00 crc kubenswrapper[4869]: I0202 14:53:00.988115 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hz9pj"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.004660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") pod \"367199b6-3340-454e-acc5-478f9b35b2df\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") "
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.004750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") pod \"367199b6-3340-454e-acc5-478f9b35b2df\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") "
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.004936 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") pod \"367199b6-3340-454e-acc5-478f9b35b2df\" (UID: \"367199b6-3340-454e-acc5-478f9b35b2df\") "
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.074633 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt" (OuterVolumeSpecName: "kube-api-access-tdbnt") pod "367199b6-3340-454e-acc5-478f9b35b2df" (UID: "367199b6-3340-454e-acc5-478f9b35b2df"). InnerVolumeSpecName "kube-api-access-tdbnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.095107 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config" (OuterVolumeSpecName: "config") pod "367199b6-3340-454e-acc5-478f9b35b2df" (UID: "367199b6-3340-454e-acc5-478f9b35b2df"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.139207 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.139508 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdbnt\" (UniqueName: \"kubernetes.io/projected/367199b6-3340-454e-acc5-478f9b35b2df-kube-api-access-tdbnt\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.172548 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "367199b6-3340-454e-acc5-478f9b35b2df" (UID: "367199b6-3340-454e-acc5-478f9b35b2df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.242767 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367199b6-3340-454e-acc5-478f9b35b2df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.539818 4869 generic.go:334] "Generic (PLEG): container finished" podID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" containerID="da76a4a0a2fd91d41e48fb82a3fd0ddaf3e6b22ad0d146b95f9759bc6eb3ab36" exitCode=0 Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.539989 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q447q" event={"ID":"2a5f9f47-1ba0-4d37-8597-874a62d9045e","Type":"ContainerDied","Data":"da76a4a0a2fd91d41e48fb82a3fd0ddaf3e6b22ad0d146b95f9759bc6eb3ab36"} Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.546262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hz9pj" event={"ID":"367199b6-3340-454e-acc5-478f9b35b2df","Type":"ContainerDied","Data":"7dd80f0858d5d331b1948ea1170d5424dc4e4ccf69aa8a84169b4800d0e4fc13"} Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.546326 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dd80f0858d5d331b1948ea1170d5424dc4e4ccf69aa8a84169b4800d0e4fc13" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.546431 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hz9pj" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.551414 4869 generic.go:334] "Generic (PLEG): container finished" podID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" containerID="f5f3adb22514a5728bdaa407debd5241eb6b5669db2e00b862292c4751c58656" exitCode=0 Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.551484 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxtsl" event={"ID":"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b","Type":"ContainerDied","Data":"f5f3adb22514a5728bdaa407debd5241eb6b5669db2e00b862292c4751c58656"} Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.845870 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:01 crc kubenswrapper[4869]: E0202 14:53:01.848329 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.849514 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" Feb 02 14:53:01 crc kubenswrapper[4869]: E0202 14:53:01.851996 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="init" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.852074 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="init" Feb 02 14:53:01 crc kubenswrapper[4869]: E0202 14:53:01.852121 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367199b6-3340-454e-acc5-478f9b35b2df" containerName="neutron-db-sync" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.852163 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="367199b6-3340-454e-acc5-478f9b35b2df" containerName="neutron-db-sync" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.852894 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="367199b6-3340-454e-acc5-478f9b35b2df" containerName="neutron-db-sync" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.852980 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6051dd-8fa8-4c0b-bd98-9d180754d64a" containerName="dnsmasq-dns" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.854502 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.858207 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.858243 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.858299 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.858322 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.859065 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.901144 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"]
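The five VerifyControllerAttachedVolume entries above enumerate the replacement dnsmasq pod's volumes: four ConfigMap-backed volumes plus the kubelet-injected projected service-account token (kube-api-access-qvfx9). Below is a minimal sketch, using the k8s.io/api types, of the pod-spec volume list those entries imply; the volume names come from the log, but the referenced ConfigMap names are assumptions, since the log's UniqueName only encodes the pod UID and volume name, not the source ConfigMap.

```go
// Sketch of the volume list implied by the reconciler entries above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func configMapVolume(name, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
			},
		},
	}
}

func main() {
	// kube-api-access-qvfx9 is omitted: kubelet injects that projected
	// service-account volume automatically.
	vols := []corev1.Volume{
		configMapVolume("config", "dnsmasq-dns"),            // ConfigMap name assumed
		configMapVolume("dns-svc", "dns-svc"),               // assumed
		configMapVolume("ovsdbserver-nb", "ovsdbserver-nb"), // assumed
		configMapVolume("ovsdbserver-sb", "ovsdbserver-sb"), // assumed
	}
	for _, v := range vols {
		fmt.Printf("volume %q -> configmap %q\n", v.Name, v.VolumeSource.ConfigMap.Name)
	}
}
```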
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.903154 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb87b4954-l5h9p"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.908659 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9hgj2"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.915267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.915571 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.915761 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.925590 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"]
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.953374 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"]
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961494 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp"
Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961615 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961662 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961709 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.961766 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.972104 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.975960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.977290 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.977588 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:01 crc kubenswrapper[4869]: I0202 14:53:01.990001 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") pod \"dnsmasq-dns-5f66db59b9-fbxcp\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") 
pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064725 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.064890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.071267 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.072570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.073531 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.080900 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.091932 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") pod \"neutron-bb87b4954-l5h9p\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") " pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.229897 4869 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.252783 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.572484 4869 generic.go:334] "Generic (PLEG): container finished" podID="818ee387-cf73-45bc-8925-c234d5fd8ee3" containerID="8962be87127b6e0d3f3ece55fe53f40715482971642999f7d7b74c30b09eeea6" exitCode=0 Feb 02 14:53:02 crc kubenswrapper[4869]: I0202 14:53:02.572755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4fqzr" event={"ID":"818ee387-cf73-45bc-8925-c234d5fd8ee3","Type":"ContainerDied","Data":"8962be87127b6e0d3f3ece55fe53f40715482971642999f7d7b74c30b09eeea6"} Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.400925 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.410131 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.412936 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.418634 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.441768 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529859 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529879 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.529944 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.530009 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.530050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.631361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.631909 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.631982 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.632031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.632066 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.632105 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.632124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") pod 
\"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.639990 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.640045 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.642259 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.644706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.654275 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.654515 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.655064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") pod \"neutron-6c4d7559c7-79dhq\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") " pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:04 crc kubenswrapper[4869]: I0202 14:53:04.744225 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.628393 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q447q" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.628598 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q447q" event={"ID":"2a5f9f47-1ba0-4d37-8597-874a62d9045e","Type":"ContainerDied","Data":"91133cd950cbaf0a2fd654c7a3e7af936c27a7b6526630fb20d70ac6c178f469"} Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.629138 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91133cd950cbaf0a2fd654c7a3e7af936c27a7b6526630fb20d70ac6c178f469" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.630797 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zxtsl" event={"ID":"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b","Type":"ContainerDied","Data":"7ec50d3c95d3d2c9d96e976502e27bc356d7e820fe0c2796a704965f259c6dc6"} Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.630822 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ec50d3c95d3d2c9d96e976502e27bc356d7e820fe0c2796a704965f259c6dc6" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.632174 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4fqzr" event={"ID":"818ee387-cf73-45bc-8925-c234d5fd8ee3","Type":"ContainerDied","Data":"ee7fd35cc885ef9baea8bed6be792f654b41db4b87960643e8aaaa20fc9891a4"} Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.632210 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7fd35cc885ef9baea8bed6be792f654b41db4b87960643e8aaaa20fc9891a4" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.663007 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.685609 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.776964 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777024 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777056 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777109 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") pod \"818ee387-cf73-45bc-8925-c234d5fd8ee3\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777169 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777211 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777233 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777256 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: 
\"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777380 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") pod \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\" (UID: \"f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777442 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") pod \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\" (UID: \"2a5f9f47-1ba0-4d37-8597-874a62d9045e\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777490 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") pod \"818ee387-cf73-45bc-8925-c234d5fd8ee3\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.777513 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") pod \"818ee387-cf73-45bc-8925-c234d5fd8ee3\" (UID: \"818ee387-cf73-45bc-8925-c234d5fd8ee3\") " Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.778999 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs" (OuterVolumeSpecName: "logs") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.785577 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4" (OuterVolumeSpecName: "kube-api-access-5xlk4") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "kube-api-access-5xlk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.785668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.786820 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "818ee387-cf73-45bc-8925-c234d5fd8ee3" (UID: "818ee387-cf73-45bc-8925-c234d5fd8ee3"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.787428 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.787512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8" (OuterVolumeSpecName: "kube-api-access-f5mg8") pod "818ee387-cf73-45bc-8925-c234d5fd8ee3" (UID: "818ee387-cf73-45bc-8925-c234d5fd8ee3"). InnerVolumeSpecName "kube-api-access-f5mg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.788094 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl" (OuterVolumeSpecName: "kube-api-access-l85sl") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "kube-api-access-l85sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.791227 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts" (OuterVolumeSpecName: "scripts") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.810619 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts" (OuterVolumeSpecName: "scripts") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.835356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.835562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "818ee387-cf73-45bc-8925-c234d5fd8ee3" (UID: "818ee387-cf73-45bc-8925-c234d5fd8ee3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.835975 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.836853 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data" (OuterVolumeSpecName: "config-data") pod "2a5f9f47-1ba0-4d37-8597-874a62d9045e" (UID: "2a5f9f47-1ba0-4d37-8597-874a62d9045e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.840131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data" (OuterVolumeSpecName: "config-data") pod "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" (UID: "f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.880901 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xlk4\" (UniqueName: \"kubernetes.io/projected/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-kube-api-access-5xlk4\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.880990 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a5f9f47-1ba0-4d37-8597-874a62d9045e-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881027 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881039 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881048 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881057 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881066 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881077 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l85sl\" (UniqueName: \"kubernetes.io/projected/2a5f9f47-1ba0-4d37-8597-874a62d9045e-kube-api-access-l85sl\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881106 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881118 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-config-data\") on node 
\"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881127 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881135 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f9f47-1ba0-4d37-8597-874a62d9045e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881145 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/818ee387-cf73-45bc-8925-c234d5fd8ee3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:06 crc kubenswrapper[4869]: I0202 14:53:06.881156 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5mg8\" (UniqueName: \"kubernetes.io/projected/818ee387-cf73-45bc-8925-c234d5fd8ee3-kube-api-access-f5mg8\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.114002 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:07 crc kubenswrapper[4869]: W0202 14:53:07.124378 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47cb4795_faf4_4845_8f4c_3675b5613437.slice/crio-0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c WatchSource:0}: Error finding container 0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c: Status 404 returned error can't find the container with id 0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.507361 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"] Feb 02 14:53:07 crc kubenswrapper[4869]: W0202 14:53:07.534224 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb918eb2a_3cab_422f_ba7d_f06c4ec21ef4.slice/crio-120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791 WatchSource:0}: Error finding container 120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791: Status 404 returned error can't find the container with id 120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791 Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.641497 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerStarted","Data":"120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791"} Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.643683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb"} Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645595 4869 generic.go:334] "Generic (PLEG): container finished" podID="47cb4795-faf4-4845-8f4c-3675b5613437" containerID="419d84c102f4f60e2c9ce52715ebe01d27cf44677cf9646b669ee52aa5fb04bc" exitCode=0 Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645685 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4fqzr" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645697 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q447q" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerDied","Data":"419d84c102f4f60e2c9ce52715ebe01d27cf44677cf9646b669ee52aa5fb04bc"} Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645778 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerStarted","Data":"0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c"} Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.645807 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zxtsl" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.785202 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:07 crc kubenswrapper[4869]: E0202 14:53:07.785720 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" containerName="placement-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.785736 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" containerName="placement-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: E0202 14:53:07.785756 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" containerName="keystone-bootstrap" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.785764 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" containerName="keystone-bootstrap" Feb 02 14:53:07 crc kubenswrapper[4869]: E0202 14:53:07.785785 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818ee387-cf73-45bc-8925-c234d5fd8ee3" containerName="barbican-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.785794 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="818ee387-cf73-45bc-8925-c234d5fd8ee3" containerName="barbican-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.786083 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="818ee387-cf73-45bc-8925-c234d5fd8ee3" containerName="barbican-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.786102 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" containerName="keystone-bootstrap" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.786110 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" containerName="placement-db-sync" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.787212 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.793641 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pg4t9" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.794148 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.794474 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.794538 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.794773 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.797736 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.901760 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-575599577-dmndq"] Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.903396 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910421 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910477 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910541 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.910603 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.911132 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-72872" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.911335 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.911564 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.911718 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.925084 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.925348 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 14:53:07 crc kubenswrapper[4869]: I0202 14:53:07.934995 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-575599577-dmndq"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012213 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-combined-ca-bundle\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012735 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-config-data\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012784 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-scripts\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012822 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jzc\" (UniqueName: \"kubernetes.io/projected/fc4c6770-5954-4777-8c4f-47397d045008-kube-api-access-h8jzc\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012882 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.012999 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-fernet-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013042 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-credential-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013165 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-internal-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013191 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-public-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013222 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.013291 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.018281 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.033734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.039800 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.051590 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.054689 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.057236 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.057694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.058514 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.058746 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") pod \"placement-79c776b57b-76pd5\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") " pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.070449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2d6ss" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.070775 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.070971 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.075045 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.114759 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121146 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-config-data\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-scripts\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jzc\" (UniqueName: \"kubernetes.io/projected/fc4c6770-5954-4777-8c4f-47397d045008-kube-api-access-h8jzc\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-fernet-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121445 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-credential-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-internal-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-public-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.121695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-combined-ca-bundle\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.132231 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.134208 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.148895 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.149714 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-combined-ca-bundle\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.150478 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-public-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.198826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-config-data\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.199314 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-internal-tls-certs\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.199748 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-fernet-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.201403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-credential-keys\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.202008 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc4c6770-5954-4777-8c4f-47397d045008-scripts\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.209722 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jzc\" (UniqueName: \"kubernetes.io/projected/fc4c6770-5954-4777-8c4f-47397d045008-kube-api-access-h8jzc\") pod \"keystone-575599577-dmndq\" (UID: \"fc4c6770-5954-4777-8c4f-47397d045008\") " pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.224412 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.229877 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.229947 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.229980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230092 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230133 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230158 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230183 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230216 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2njq\" (UniqueName: 
\"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.230287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.286761 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.295123 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.369889 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.370209 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.370393 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.370545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.370854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.371130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.371899 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.371977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.372012 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.372047 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.379053 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2njq\" (UniqueName: \"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.386032 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.387821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.387934 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.390229 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.401413 4869 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.401719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.410394 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.410509 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.421480 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.432031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.435831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") pod \"barbican-worker-c9668db5f-6b8rj\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.450626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2njq\" (UniqueName: \"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") pod \"barbican-keystone-listener-654bc95f8d-8hcrz\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.488392 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d7f6679db-zbdxv"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.490640 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.507042 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-675f9657dc-6qw7m"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.529673 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d7f6679db-zbdxv"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.529800 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.545013 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-675f9657dc-6qw7m"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.548524 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.562006 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.563817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.567436 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.584765 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585109 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-logs\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585288 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585371 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data-custom\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585438 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585562 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-combined-ca-bundle\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.585987 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnt9\" (UniqueName: \"kubernetes.io/projected/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-kube-api-access-kbnt9\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.590478 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.693202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data-custom\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.693816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.693898 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data-custom\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.693976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694006 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxk7g\" (UniqueName: \"kubernetes.io/projected/18463ac0-a171-4ae0-9201-8df3d574eb70-kube-api-access-dxk7g\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694165 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-combined-ca-bundle\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694348 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-combined-ca-bundle\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694586 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbnt9\" (UniqueName: \"kubernetes.io/projected/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-kube-api-access-kbnt9\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " 
pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694766 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-logs\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.694891 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18463ac0-a171-4ae0-9201-8df3d574eb70-logs\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.697983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.702472 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-logs\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: 
\"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.703154 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.705272 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.707415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.709321 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.714222 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.720843 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data-custom\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.727445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerStarted","Data":"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"} Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.727535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerStarted","Data":"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"} Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.729601 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bb87b4954-l5h9p" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.737778 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-config-data\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.738784 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-combined-ca-bundle\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " 
pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.745301 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") pod \"dnsmasq-dns-869f779d85-ttvch\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.749172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerStarted","Data":"571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846"} Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.749373 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="dnsmasq-dns" containerID="cri-o://571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846" gracePeriod=10 Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.749753 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.772212 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbnt9\" (UniqueName: \"kubernetes.io/projected/9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3-kube-api-access-kbnt9\") pod \"barbican-keystone-listener-5d7f6679db-zbdxv\" (UID: \"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3\") " pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.774439 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bb87b4954-l5h9p" podStartSLOduration=7.774414107 podStartE2EDuration="7.774414107s" podCreationTimestamp="2026-02-02 14:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:08.766789498 +0000 UTC m=+1190.411426278" watchObservedRunningTime="2026-02-02 14:53:08.774414107 +0000 UTC m=+1190.419050877" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.799862 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18463ac0-a171-4ae0-9201-8df3d574eb70-logs\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.800058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data-custom\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.800150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.800207 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxk7g\" (UniqueName: \"kubernetes.io/projected/18463ac0-a171-4ae0-9201-8df3d574eb70-kube-api-access-dxk7g\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-combined-ca-bundle\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802447 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.802496 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.803368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18463ac0-a171-4ae0-9201-8df3d574eb70-logs\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.804607 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" podStartSLOduration=7.804577233 podStartE2EDuration="7.804577233s" podCreationTimestamp="2026-02-02 14:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:08.795680733 +0000 UTC m=+1190.440317503" 
watchObservedRunningTime="2026-02-02 14:53:08.804577233 +0000 UTC m=+1190.449214003" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.805239 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.833125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.834711 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data-custom\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.835816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-combined-ca-bundle\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.839956 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.870508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.871415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18463ac0-a171-4ae0-9201-8df3d574eb70-config-data\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.872224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxk7g\" (UniqueName: \"kubernetes.io/projected/18463ac0-a171-4ae0-9201-8df3d574eb70-kube-api-access-dxk7g\") pod \"barbican-worker-675f9657dc-6qw7m\" (UID: \"18463ac0-a171-4ae0-9201-8df3d574eb70\") " pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.885335 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") pod \"barbican-api-59bd6db9d6-z6bh8\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " 
pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.892490 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.928518 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" Feb 02 14:53:08 crc kubenswrapper[4869]: I0202 14:53:08.948753 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-675f9657dc-6qw7m" Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.031414 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.120594 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.746311 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-575599577-dmndq"] Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.794624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerStarted","Data":"f71f18fd5c51bc2ff8e4203c7e7213ae442d57834261ba22fc6581334d9a1f73"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.839949 4869 generic.go:334] "Generic (PLEG): container finished" podID="47cb4795-faf4-4845-8f4c-3675b5613437" containerID="571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846" exitCode=0 Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.840042 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerDied","Data":"571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.853374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-575599577-dmndq" event={"ID":"fc4c6770-5954-4777-8c4f-47397d045008","Type":"ContainerStarted","Data":"cbbd11885d2dd89a0ee90b2accf8bc63a4b6150bcca43f03dd770a7c6cccf327"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.856701 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.866629 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerStarted","Data":"a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.866683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerStarted","Data":"45f00cd48b456ba32635e74b444d036ced51d5190a5131b65618e8664fdb1787"} Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.947632 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.947762 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.947980 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.948013 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.948081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") pod \"47cb4795-faf4-4845-8f4c-3675b5613437\" (UID: \"47cb4795-faf4-4845-8f4c-3675b5613437\") " Feb 02 14:53:09 crc kubenswrapper[4869]: I0202 14:53:09.964256 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9" (OuterVolumeSpecName: "kube-api-access-qvfx9") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "kube-api-access-qvfx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.008764 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.040707 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.051120 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvfx9\" (UniqueName: \"kubernetes.io/projected/47cb4795-faf4-4845-8f4c-3675b5613437-kube-api-access-qvfx9\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.078029 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bbd64cf97-7t5h5"] Feb 02 14:53:10 crc kubenswrapper[4869]: E0202 14:53:10.078616 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="dnsmasq-dns" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.078632 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="dnsmasq-dns" Feb 02 14:53:10 crc kubenswrapper[4869]: E0202 14:53:10.078650 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="init" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.078658 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="init" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.078980 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" containerName="dnsmasq-dns" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.081538 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.082428 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b3a4838_a42e_4ff4_a4b2_7dd079089a42.slice/crio-eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb WatchSource:0}: Error finding container eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb: Status 404 returned error can't find the container with id eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.088417 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bbd64cf97-7t5h5"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.144164 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.205959 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1dcc76_d41e_4492_95d0_dcbb0b1254b4.slice/crio-3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c WatchSource:0}: Error finding container 3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c: Status 404 returned error can't find the container with id 3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.217128 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.226703 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ad3cba7_fb7e_43f6_b818_4b2c392590e0.slice/crio-e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000 WatchSource:0}: Error finding container e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000: Status 404 returned error can't find the container with id e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000 Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.256918 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-ovndb-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-combined-ca-bundle\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-internal-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-public-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257244 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257355 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-httpd-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.257423 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4g2\" (UniqueName: \"kubernetes.io/projected/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-kube-api-access-xz4g2\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.272566 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d7f6679db-zbdxv"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.280442 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.299061 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eddd0ab_42d6_4db0_b0db_eeb0259f4ec3.slice/crio-95fd24a4f0ef849e7c4f75feb035426268f37142c35c1d820c4bcc2e259e4dfd WatchSource:0}: Error finding container 95fd24a4f0ef849e7c4f75feb035426268f37142c35c1d820c4bcc2e259e4dfd: Status 404 returned error can't find the container with id 95fd24a4f0ef849e7c4f75feb035426268f37142c35c1d820c4bcc2e259e4dfd Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.363747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-httpd-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.363957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4g2\" (UniqueName: \"kubernetes.io/projected/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-kube-api-access-xz4g2\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364044 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-ovndb-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364121 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-combined-ca-bundle\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-internal-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364323 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-public-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.364612 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc 
kubenswrapper[4869]: I0202 14:53:10.369512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.371419 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.374138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-combined-ca-bundle\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.391495 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-ovndb-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.399071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4g2\" (UniqueName: \"kubernetes.io/projected/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-kube-api-access-xz4g2\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.401299 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-httpd-config\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.402083 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-internal-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.403965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca-public-tls-certs\") pod \"neutron-5bbd64cf97-7t5h5\" (UID: \"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca\") " pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.420510 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config" (OuterVolumeSpecName: "config") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.429463 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-675f9657dc-6qw7m"] Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.429880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47cb4795-faf4-4845-8f4c-3675b5613437" (UID: "47cb4795-faf4-4845-8f4c-3675b5613437"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.433727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.468082 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.468374 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.468478 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47cb4795-faf4-4845-8f4c-3675b5613437-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.472726 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:10 crc kubenswrapper[4869]: W0202 14:53:10.473363 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c561af1_f926_4ced_9d2e_05778fed8a44.slice/crio-19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18 WatchSource:0}: Error finding container 19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18: Status 404 returned error can't find the container with id 19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18 Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.898430 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerStarted","Data":"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.918844 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" event={"ID":"47cb4795-faf4-4845-8f4c-3675b5613437","Type":"ContainerDied","Data":"0724324a44ea7c5f22202c36df3f869cddc0eeea9fed4095821a2002e015fd3c"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.918930 4869 scope.go:117] "RemoveContainer" containerID="571d34c74b189c8408eaf89d45eed19f0f5b687c154c47f5694988f74cb33846" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.919169 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fbxcp" Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.949719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" event={"ID":"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3","Type":"ContainerStarted","Data":"95fd24a4f0ef849e7c4f75feb035426268f37142c35c1d820c4bcc2e259e4dfd"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.955250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerStarted","Data":"3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.978707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-575599577-dmndq" event={"ID":"fc4c6770-5954-4777-8c4f-47397d045008","Type":"ContainerStarted","Data":"2f8f9684b1886cc82b30b6226705d756eea2f05b32d706f5455a6bb4ff96e63e"} Feb 02 14:53:10 crc kubenswrapper[4869]: I0202 14:53:10.980405 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-575599577-dmndq" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.009119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerStarted","Data":"b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.009563 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070755 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070842 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerStarted","Data":"eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070877 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fbxcp"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerStarted","Data":"19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-675f9657dc-6qw7m" event={"ID":"18463ac0-a171-4ae0-9201-8df3d574eb70","Type":"ContainerStarted","Data":"4dd953267fa3787e6996b19cbf74956668a1fe03d2b2c1bab19ac6f07f3d8493"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.070968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerStarted","Data":"e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000"} Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.073359 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-575599577-dmndq" podStartSLOduration=4.073337828 podStartE2EDuration="4.073337828s" 
podCreationTimestamp="2026-02-02 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:11.022010587 +0000 UTC m=+1192.666647357" watchObservedRunningTime="2026-02-02 14:53:11.073337828 +0000 UTC m=+1192.717974598" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.086663 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c4d7559c7-79dhq" podStartSLOduration=7.086638257 podStartE2EDuration="7.086638257s" podCreationTimestamp="2026-02-02 14:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:11.068517158 +0000 UTC m=+1192.713153928" watchObservedRunningTime="2026-02-02 14:53:11.086638257 +0000 UTC m=+1192.731275017" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.103249 4869 scope.go:117] "RemoveContainer" containerID="419d84c102f4f60e2c9ce52715ebe01d27cf44677cf9646b669ee52aa5fb04bc" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.272996 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bbd64cf97-7t5h5"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.284736 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dc5588748-k6f99"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.287025 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.296690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc5588748-k6f99"] Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-config-data\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422082 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-scripts\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvptk\" (UniqueName: \"kubernetes.io/projected/ec674145-26a6-4ce9-9e00-083bccdad283-kube-api-access-cvptk\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-internal-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422217 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-public-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec674145-26a6-4ce9-9e00-083bccdad283-logs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.422293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-combined-ca-bundle\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.488038 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47cb4795-faf4-4845-8f4c-3675b5613437" path="/var/lib/kubelet/pods/47cb4795-faf4-4845-8f4c-3675b5613437/volumes" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525593 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-config-data\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-scripts\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525856 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvptk\" (UniqueName: \"kubernetes.io/projected/ec674145-26a6-4ce9-9e00-083bccdad283-kube-api-access-cvptk\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-internal-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.525955 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-public-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.526013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec674145-26a6-4ce9-9e00-083bccdad283-logs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc 
kubenswrapper[4869]: I0202 14:53:11.526069 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-combined-ca-bundle\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.529068 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec674145-26a6-4ce9-9e00-083bccdad283-logs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.538901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-scripts\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.539085 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-combined-ca-bundle\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.539263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-public-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.541630 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-config-data\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.543364 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec674145-26a6-4ce9-9e00-083bccdad283-internal-tls-certs\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.567654 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvptk\" (UniqueName: \"kubernetes.io/projected/ec674145-26a6-4ce9-9e00-083bccdad283-kube-api-access-cvptk\") pod \"placement-dc5588748-k6f99\" (UID: \"ec674145-26a6-4ce9-9e00-083bccdad283\") " pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:11 crc kubenswrapper[4869]: I0202 14:53:11.660331 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.083748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bbd64cf97-7t5h5" event={"ID":"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca","Type":"ContainerStarted","Data":"4b104cee6894c28ad44308cd6cf2d5f59a2244071ec6b719e2459022cf1481e0"} Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.087455 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerID="9d24ac1d4cb800028d8b0cae08d3371a0141fabf6b8ee870243781d99e8bd219" exitCode=0 Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.087540 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerDied","Data":"9d24ac1d4cb800028d8b0cae08d3371a0141fabf6b8ee870243781d99e8bd219"} Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.090348 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerStarted","Data":"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"} Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.090534 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.091145 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.093752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerStarted","Data":"30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8"} Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.094005 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bb87b4954-l5h9p" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-api" containerID="cri-o://c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a" gracePeriod=30 Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.094183 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bb87b4954-l5h9p" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd" containerID="cri-o://5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d" gracePeriod=30 Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.154630 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-79c776b57b-76pd5" podStartSLOduration=5.154609505 podStartE2EDuration="5.154609505s" podCreationTimestamp="2026-02-02 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:12.144393102 +0000 UTC m=+1193.789029882" watchObservedRunningTime="2026-02-02 14:53:12.154609505 +0000 UTC m=+1193.799246275" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.319461 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc5588748-k6f99"] Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.729854 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77794c6b74-fhtds"] Feb 02 14:53:12 crc 
kubenswrapper[4869]: I0202 14:53:12.732856 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.742593 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.745069 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.802382 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77794c6b74-fhtds"] Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxmlt\" (UniqueName: \"kubernetes.io/projected/bbb63205-2a5c-4177-8b7f-2a141324ba49-kube-api-access-kxmlt\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-public-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-internal-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904562 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-combined-ca-bundle\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904606 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb63205-2a5c-4177-8b7f-2a141324ba49-logs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:12 crc kubenswrapper[4869]: I0202 14:53:12.904624 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data-custom\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 
14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.006846 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-combined-ca-bundle\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007469 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb63205-2a5c-4177-8b7f-2a141324ba49-logs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data-custom\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxmlt\" (UniqueName: \"kubernetes.io/projected/bbb63205-2a5c-4177-8b7f-2a141324ba49-kube-api-access-kxmlt\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007626 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007712 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-public-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.007772 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-internal-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.013884 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbb63205-2a5c-4177-8b7f-2a141324ba49-logs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.015542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-public-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.015812 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data-custom\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.016242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-config-data\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.016836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-combined-ca-bundle\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.017101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbb63205-2a5c-4177-8b7f-2a141324ba49-internal-tls-certs\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.030231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxmlt\" (UniqueName: \"kubernetes.io/projected/bbb63205-2a5c-4177-8b7f-2a141324ba49-kube-api-access-kxmlt\") pod \"barbican-api-77794c6b74-fhtds\" (UID: \"bbb63205-2a5c-4177-8b7f-2a141324ba49\") " pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.056430 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.114395 4869 generic.go:334] "Generic (PLEG): container finished" podID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerID="5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d" exitCode=0 Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.114582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerDied","Data":"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.117387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerStarted","Data":"a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.118064 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.118129 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.119662 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc5588748-k6f99" event={"ID":"ec674145-26a6-4ce9-9e00-083bccdad283","Type":"ContainerStarted","Data":"44cccd8d0b052082992b6a91275c9579e26c2c63f40ad77c48f4d7adc5b83993"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.121230 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bbd64cf97-7t5h5" event={"ID":"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca","Type":"ContainerStarted","Data":"d016fa5ec7bdf0f7d1b45785f283fafd1908584e89557ab383231269829371d5"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.129874 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerStarted","Data":"639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55"} Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.129948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.168443 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59bd6db9d6-z6bh8" podStartSLOduration=5.168414212 podStartE2EDuration="5.168414212s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:13.144593202 +0000 UTC m=+1194.789229972" watchObservedRunningTime="2026-02-02 14:53:13.168414212 +0000 UTC m=+1194.813050982" Feb 02 14:53:13 crc kubenswrapper[4869]: I0202 14:53:13.199281 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-ttvch" podStartSLOduration=5.199255146 podStartE2EDuration="5.199255146s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:13.176980884 +0000 UTC m=+1194.821617654" watchObservedRunningTime="2026-02-02 14:53:13.199255146 +0000 UTC 
m=+1194.843891916" Feb 02 14:53:14 crc kubenswrapper[4869]: I0202 14:53:14.149016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc5588748-k6f99" event={"ID":"ec674145-26a6-4ce9-9e00-083bccdad283","Type":"ContainerStarted","Data":"83a4d14bcbb12c200c324e8e3f81b3b7ed84ad9c08a61b317cc43995548b52c0"} Feb 02 14:53:14 crc kubenswrapper[4869]: I0202 14:53:14.693248 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77794c6b74-fhtds"] Feb 02 14:53:14 crc kubenswrapper[4869]: W0202 14:53:14.712652 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbb63205_2a5c_4177_8b7f_2a141324ba49.slice/crio-4e070d4ea007a9a6c71eeb6c58e5ec5ab20834ed2bd179e24720cb52fb519609 WatchSource:0}: Error finding container 4e070d4ea007a9a6c71eeb6c58e5ec5ab20834ed2bd179e24720cb52fb519609: Status 404 returned error can't find the container with id 4e070d4ea007a9a6c71eeb6c58e5ec5ab20834ed2bd179e24720cb52fb519609 Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.251949 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-675f9657dc-6qw7m" event={"ID":"18463ac0-a171-4ae0-9201-8df3d574eb70","Type":"ContainerStarted","Data":"49aa13495ff012785f6cbad25793c330a84cac85dd60f37679961c9284263028"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.255532 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerStarted","Data":"8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.258607 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerStarted","Data":"3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.264492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" event={"ID":"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3","Type":"ContainerStarted","Data":"9abd5fe1fa5ac24cf4114633dce2bf05ae28693402cee1f3e9d851b59359b889"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.266517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc5588748-k6f99" event={"ID":"ec674145-26a6-4ce9-9e00-083bccdad283","Type":"ContainerStarted","Data":"6fdef382ff95dd8ee1fd435776d623c3d9b832e9ad25c82012575a87654ba18d"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.267269 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.267346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc5588748-k6f99" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.280595 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bbd64cf97-7t5h5" event={"ID":"1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca","Type":"ContainerStarted","Data":"2ee5470b5b8e5d5ff05e0d6e6d1c5495f32906d17a86a858aad17186fb901bbc"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.283142 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bbd64cf97-7t5h5" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.286101 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" podStartSLOduration=3.224019918 podStartE2EDuration="7.286076566s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="2026-02-02 14:53:10.167684898 +0000 UTC m=+1191.812321668" lastFinishedPulling="2026-02-02 14:53:14.229741546 +0000 UTC m=+1195.874378316" observedRunningTime="2026-02-02 14:53:15.28219126 +0000 UTC m=+1196.926828020" watchObservedRunningTime="2026-02-02 14:53:15.286076566 +0000 UTC m=+1196.930713336" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.293884 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77794c6b74-fhtds" event={"ID":"bbb63205-2a5c-4177-8b7f-2a141324ba49","Type":"ContainerStarted","Data":"a7c45f780b4e93b2590a48689aea4853fdbf85cfa83b87ebb46b7331ac84ed9e"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.293973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77794c6b74-fhtds" event={"ID":"bbb63205-2a5c-4177-8b7f-2a141324ba49","Type":"ContainerStarted","Data":"4e070d4ea007a9a6c71eeb6c58e5ec5ab20834ed2bd179e24720cb52fb519609"} Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.304868 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.305086 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.305374 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.306607 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.306750 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5" gracePeriod=600 Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.336073 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-dc5588748-k6f99" podStartSLOduration=4.336043042 podStartE2EDuration="4.336043042s" podCreationTimestamp="2026-02-02 14:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:15.332587147 +0000 UTC m=+1196.977223917" watchObservedRunningTime="2026-02-02 14:53:15.336043042 +0000 UTC m=+1196.980679812" 
Feb 02 14:53:15 crc kubenswrapper[4869]: I0202 14:53:15.380879 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bbd64cf97-7t5h5" podStartSLOduration=5.380849592 podStartE2EDuration="5.380849592s" podCreationTimestamp="2026-02-02 14:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:15.361520173 +0000 UTC m=+1197.006156943" watchObservedRunningTime="2026-02-02 14:53:15.380849592 +0000 UTC m=+1197.025486362" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.307508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77794c6b74-fhtds" event={"ID":"bbb63205-2a5c-4177-8b7f-2a141324ba49","Type":"ContainerStarted","Data":"72b5e3e6869b43d44736a0a14489b839e5de3b97ac12618669703cb23d6c1f8b"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.309634 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.309670 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.311416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-675f9657dc-6qw7m" event={"ID":"18463ac0-a171-4ae0-9201-8df3d574eb70","Type":"ContainerStarted","Data":"e78201b2a276911e29e1b21ed47e7bb4f8fa0dfbac6e45a8ff947dc12f3a9c53"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.314023 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerStarted","Data":"4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.316773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s2dwg" event={"ID":"f0e63b99-6d06-44ea-a061-b9f79551126a","Type":"ContainerStarted","Data":"0aa88d3b57202e0e2723bae5c11f79197f7959d3a183ef080d27b30920dc1f8a"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.321455 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5" exitCode=0 Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.321555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.321680 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.321708 4869 scope.go:117] "RemoveContainer" containerID="132088891d387f31e6f33bf321a046d8d47bc47917e608beae0ff723f099aa56" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.332528 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" 
event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerStarted","Data":"6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.335216 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" event={"ID":"9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3","Type":"ContainerStarted","Data":"c94b38428dfa7121ceddf733bc0447aacfd91627945553c335ebcd8fe2f0710b"} Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.401301 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77794c6b74-fhtds" podStartSLOduration=4.401260743 podStartE2EDuration="4.401260743s" podCreationTimestamp="2026-02-02 14:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:16.355258784 +0000 UTC m=+1197.999895554" watchObservedRunningTime="2026-02-02 14:53:16.401260743 +0000 UTC m=+1198.045897513" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.427831 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d7f6679db-zbdxv" podStartSLOduration=4.502266862 podStartE2EDuration="8.427798s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="2026-02-02 14:53:10.302036054 +0000 UTC m=+1191.946672824" lastFinishedPulling="2026-02-02 14:53:14.227567182 +0000 UTC m=+1195.872203962" observedRunningTime="2026-02-02 14:53:16.422387175 +0000 UTC m=+1198.067023955" watchObservedRunningTime="2026-02-02 14:53:16.427798 +0000 UTC m=+1198.072434770" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.546550 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-s2dwg" podStartSLOduration=4.402591351 podStartE2EDuration="46.546520409s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="2026-02-02 14:52:31.943471194 +0000 UTC m=+1153.588107964" lastFinishedPulling="2026-02-02 14:53:14.087400252 +0000 UTC m=+1195.732037022" observedRunningTime="2026-02-02 14:53:16.461179196 +0000 UTC m=+1198.105815966" watchObservedRunningTime="2026-02-02 14:53:16.546520409 +0000 UTC m=+1198.191157179" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.582141 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"] Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.583601 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-675f9657dc-6qw7m" podStartSLOduration=4.850293007 podStartE2EDuration="8.583577316s" podCreationTimestamp="2026-02-02 14:53:08 +0000 UTC" firstStartedPulling="2026-02-02 14:53:10.494397576 +0000 UTC m=+1192.139034346" lastFinishedPulling="2026-02-02 14:53:14.227681885 +0000 UTC m=+1195.872318655" observedRunningTime="2026-02-02 14:53:16.483689073 +0000 UTC m=+1198.128325843" watchObservedRunningTime="2026-02-02 14:53:16.583577316 +0000 UTC m=+1198.228214086" Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.603973 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:16 crc kubenswrapper[4869]: I0202 14:53:16.620569 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c9668db5f-6b8rj" podStartSLOduration=5.630583208 podStartE2EDuration="9.620536871s" 
podCreationTimestamp="2026-02-02 14:53:07 +0000 UTC" firstStartedPulling="2026-02-02 14:53:10.238396618 +0000 UTC m=+1191.883033388" lastFinishedPulling="2026-02-02 14:53:14.228350281 +0000 UTC m=+1195.872987051" observedRunningTime="2026-02-02 14:53:16.534949142 +0000 UTC m=+1198.179585932" watchObservedRunningTime="2026-02-02 14:53:16.620536871 +0000 UTC m=+1198.265173641" Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.387662 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-c9668db5f-6b8rj" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker-log" containerID="cri-o://8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8" gracePeriod=30 Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.387719 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener-log" containerID="cri-o://3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38" gracePeriod=30 Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.387720 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-c9668db5f-6b8rj" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker" containerID="cri-o://4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea" gracePeriod=30 Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.387847 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener" containerID="cri-o://6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9" gracePeriod=30 Feb 02 14:53:18 crc kubenswrapper[4869]: I0202 14:53:18.896186 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.094172 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.095030 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" containerID="cri-o://a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974" gracePeriod=10 Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.426587 4869 generic.go:334] "Generic (PLEG): container finished" podID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerID="3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38" exitCode=143 Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.426706 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerDied","Data":"3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38"} Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.444213 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerID="4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea" exitCode=0 Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.444263 4869 generic.go:334] "Generic (PLEG): container 
finished" podID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerID="8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8" exitCode=143 Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.444294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerDied","Data":"4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea"} Feb 02 14:53:19 crc kubenswrapper[4869]: I0202 14:53:19.444330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerDied","Data":"8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8"} Feb 02 14:53:20 crc kubenswrapper[4869]: I0202 14:53:20.491984 4869 generic.go:334] "Generic (PLEG): container finished" podID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerID="a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974" exitCode=0 Feb 02 14:53:20 crc kubenswrapper[4869]: I0202 14:53:20.492104 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerDied","Data":"a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974"} Feb 02 14:53:20 crc kubenswrapper[4869]: I0202 14:53:20.504652 4869 generic.go:334] "Generic (PLEG): container finished" podID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerID="6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9" exitCode=0 Feb 02 14:53:20 crc kubenswrapper[4869]: I0202 14:53:20.504710 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerDied","Data":"6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9"} Feb 02 14:53:21 crc kubenswrapper[4869]: I0202 14:53:21.162632 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:21 crc kubenswrapper[4869]: I0202 14:53:21.519502 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Feb 02 14:53:21 crc kubenswrapper[4869]: I0202 14:53:21.817627 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:23 crc kubenswrapper[4869]: I0202 14:53:23.563039 4869 generic.go:334] "Generic (PLEG): container finished" podID="f0e63b99-6d06-44ea-a061-b9f79551126a" containerID="0aa88d3b57202e0e2723bae5c11f79197f7959d3a183ef080d27b30920dc1f8a" exitCode=0 Feb 02 14:53:23 crc kubenswrapper[4869]: I0202 14:53:23.563121 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s2dwg" event={"ID":"f0e63b99-6d06-44ea-a061-b9f79551126a","Type":"ContainerDied","Data":"0aa88d3b57202e0e2723bae5c11f79197f7959d3a183ef080d27b30920dc1f8a"} Feb 02 14:53:24 crc kubenswrapper[4869]: I0202 14:53:24.901172 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 14:53:24 crc kubenswrapper[4869]: I0202 14:53:24.979497 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77794c6b74-fhtds" Feb 02 
14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.062104 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.062401 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59bd6db9d6-z6bh8" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api-log" containerID="cri-o://30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8" gracePeriod=30 Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.062816 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59bd6db9d6-z6bh8" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api" containerID="cri-o://a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287" gracePeriod=30 Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.611275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" event={"ID":"09d16c44-bf33-426a-ae17-9ec52f7c4bdf","Type":"ContainerDied","Data":"9d20104835b08533de4169d71a96c0b24b6f27636df1686a4f2724353347f5f4"} Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.611799 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d20104835b08533de4169d71a96c0b24b6f27636df1686a4f2724353347f5f4" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.619293 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s2dwg" event={"ID":"f0e63b99-6d06-44ea-a061-b9f79551126a","Type":"ContainerDied","Data":"86f6ff04cbc086ccbfd2e84539b1d96a49f77aa4c0aa0c0898599df70d3ebe0a"} Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.619354 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f6ff04cbc086ccbfd2e84539b1d96a49f77aa4c0aa0c0898599df70d3ebe0a" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.623097 4869 generic.go:334] "Generic (PLEG): container finished" podID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerID="30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8" exitCode=143 Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.624282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerDied","Data":"30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8"} Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.628525 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.641451 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.705579 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.705785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.705859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.705924 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706030 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706166 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706401 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") pod \"f0e63b99-6d06-44ea-a061-b9f79551126a\" (UID: \"f0e63b99-6d06-44ea-a061-b9f79551126a\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706422 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: 
\"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.706457 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") pod \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\" (UID: \"09d16c44-bf33-426a-ae17-9ec52f7c4bdf\") " Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.710966 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.734326 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts" (OuterVolumeSpecName: "scripts") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.734517 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw" (OuterVolumeSpecName: "kube-api-access-l9jzw") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "kube-api-access-l9jzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.744443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.744735 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv" (OuterVolumeSpecName: "kube-api-access-n8krv") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "kube-api-access-n8krv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813738 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e63b99-6d06-44ea-a061-b9f79551126a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813780 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9jzw\" (UniqueName: \"kubernetes.io/projected/f0e63b99-6d06-44ea-a061-b9f79551126a-kube-api-access-l9jzw\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813796 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8krv\" (UniqueName: \"kubernetes.io/projected/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-kube-api-access-n8krv\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813815 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.813828 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.847890 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.853741 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data" (OuterVolumeSpecName: "config-data") pod "f0e63b99-6d06-44ea-a061-b9f79551126a" (UID: "f0e63b99-6d06-44ea-a061-b9f79551126a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.880552 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config" (OuterVolumeSpecName: "config") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.882227 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.888441 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.894331 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09d16c44-bf33-426a-ae17-9ec52f7c4bdf" (UID: "09d16c44-bf33-426a-ae17-9ec52f7c4bdf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.916957 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917000 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917010 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917020 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917030 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09d16c44-bf33-426a-ae17-9ec52f7c4bdf-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:25 crc kubenswrapper[4869]: I0202 14:53:25.917042 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e63b99-6d06-44ea-a061-b9f79551126a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.292205 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.328871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.329092 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.329157 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.329226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.329290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") pod \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\" (UID: \"4ad3cba7-fb7e-43f6-b818-4b2c392590e0\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.334874 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs" (OuterVolumeSpecName: "logs") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.338074 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg" (OuterVolumeSpecName: "kube-api-access-5tcqg") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). InnerVolumeSpecName "kube-api-access-5tcqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.344365 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.361867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.383430 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data" (OuterVolumeSpecName: "config-data") pod "4ad3cba7-fb7e-43f6-b818-4b2c392590e0" (UID: "4ad3cba7-fb7e-43f6-b818-4b2c392590e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431480 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431540 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431557 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431568 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.431582 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tcqg\" (UniqueName: \"kubernetes.io/projected/4ad3cba7-fb7e-43f6-b818-4b2c392590e0-kube-api-access-5tcqg\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9668db5f-6b8rj" event={"ID":"4ad3cba7-fb7e-43f6-b818-4b2c392590e0","Type":"ContainerDied","Data":"e30d85709f7fba68928f655449b385355b25fa3924b114dd08365048b85d9000"} Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638811 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c9668db5f-6b8rj" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638807 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s2dwg" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638831 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-bzm58" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.638847 4869 scope.go:117] "RemoveContainer" containerID="4cdc1c6ec9136e063c5dbc868aedd5caf6520324f4538e7b850f8f01727547ea" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.746724 4869 scope.go:117] "RemoveContainer" containerID="8302c35ccd009fd1685ef993ca56993027ff1b85bf2a02821a036f9ad6cda0a8" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.781212 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.804324 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.814399 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-bzm58"] Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.835941 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.840129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.840240 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2njq\" (UniqueName: \"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.840276 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.840849 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs" (OuterVolumeSpecName: "logs") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.841010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.841164 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") pod \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\" (UID: \"2b3a4838-a42e-4ff4-a4b2-7dd079089a42\") " Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.843054 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.847848 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-c9668db5f-6b8rj"] Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.855083 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.855229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq" (OuterVolumeSpecName: "kube-api-access-j2njq") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). InnerVolumeSpecName "kube-api-access-j2njq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.896952 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.945228 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.945284 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.945302 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2njq\" (UniqueName: \"kubernetes.io/projected/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-kube-api-access-j2njq\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:26 crc kubenswrapper[4869]: I0202 14:53:26.950622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data" (OuterVolumeSpecName: "config-data") pod "2b3a4838-a42e-4ff4-a4b2-7dd079089a42" (UID: "2b3a4838-a42e-4ff4-a4b2-7dd079089a42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.041369 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.041821 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener-log" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.041845 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener-log" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.041865 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" containerName="cinder-db-sync" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.041875 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" containerName="cinder-db-sync" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.041892 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="init" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.041901 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="init" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.042006 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042019 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.042031 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042039 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.042060 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker-log" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042070 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker-log" Feb 02 14:53:27 crc kubenswrapper[4869]: E0202 14:53:27.042081 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042090 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042349 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener-log" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042406 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042434 4869 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" containerName="dnsmasq-dns" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042449 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" containerName="barbican-keystone-listener" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042463 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" containerName="cinder-db-sync" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.042614 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" containerName="barbican-worker-log" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.043812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.047449 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b3a4838-a42e-4ff4-a4b2-7dd079089a42-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.050678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-92dp9" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.050678 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.050892 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.050950 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.079158 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.155747 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.155835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.157721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.158088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.158347 
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.158411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.193839 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"]
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.200195 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-nntnx"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.232462 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"]
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261254 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261284 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261325 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261407 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261565 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261595 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261618 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.261723 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.265925 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.265936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.266984 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.267071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0"
(UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.282128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") pod \"cinder-scheduler-0\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") " pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.342377 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.344187 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.352781 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.358410 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363639 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363712 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363773 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363852 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363878 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.365544 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.365835 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.363901 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.366074 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.366161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.366288 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.367356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.370081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.374160 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.393747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") pod \"dnsmasq-dns-58db5546cc-nntnx\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.467675 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470188 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.470745 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.471607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") pod \"cinder-api-0\" (UID: 
\"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.472198 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.474202 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.476418 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.476938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.477614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.478936 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d16c44-bf33-426a-ae17-9ec52f7c4bdf" path="/var/lib/kubelet/pods/09d16c44-bf33-426a-ae17-9ec52f7c4bdf/volumes" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.479695 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad3cba7-fb7e-43f6-b818-4b2c392590e0" path="/var/lib/kubelet/pods/4ad3cba7-fb7e-43f6-b818-4b2c392590e0/volumes" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.496560 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") pod \"cinder-api-0\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " pod="openstack/cinder-api-0" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.540015 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.682169 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.748521 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerStarted","Data":"40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75"}
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.749138 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.749624 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="proxy-httpd" containerID="cri-o://40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75" gracePeriod=30
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.749887 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="sg-core" containerID="cri-o://32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb" gracePeriod=30
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.749955 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-notification-agent" containerID="cri-o://905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121" gracePeriod=30
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.750826 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-central-agent" containerID="cri-o://3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49" gracePeriod=30
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.793634 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.609012135 podStartE2EDuration="57.793604435s" podCreationTimestamp="2026-02-02 14:52:30 +0000 UTC" firstStartedPulling="2026-02-02 14:52:32.269553226 +0000 UTC m=+1153.914189996" lastFinishedPulling="2026-02-02 14:53:26.454145526 +0000 UTC m=+1208.098782296" observedRunningTime="2026-02-02 14:53:27.780208933 +0000 UTC m=+1209.424845703" watchObservedRunningTime="2026-02-02 14:53:27.793604435 +0000 UTC m=+1209.438241205"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.826491 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz" event={"ID":"2b3a4838-a42e-4ff4-a4b2-7dd079089a42","Type":"ContainerDied","Data":"eebefb75b3b56729a4db1dad88f87be9598306e135df97f90883a566d4e15fcb"}
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.826560 4869 scope.go:117] "RemoveContainer" containerID="6cdb4ca6e6dd88edf4c8de7c32a12fb9e104b1dd81d36865840668ebd6d84df9"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.826812 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-654bc95f8d-8hcrz"
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.906844 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"]
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.944264 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-654bc95f8d-8hcrz"]
Feb 02 14:53:27 crc kubenswrapper[4869]: I0202 14:53:27.982183 4869 scope.go:117] "RemoveContainer" containerID="3400e423d40a54a6296a92e68d9e0c94bbc51102b5f07ba469e3ce29702bdf38"
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.058403 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.425598 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.517145 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"]
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.847183 4869 generic.go:334] "Generic (PLEG): container finished" podID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerID="a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287" exitCode=0
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.847258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerDied","Data":"a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287"}
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.853221 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerStarted","Data":"803147708c69b2f495d5e0819fb5fcae8a7b960c9ff123b14eea9ec0607d19e2"}
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.869126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerStarted","Data":"3aa5c96598f9d84b8ea60ab2f8542911baacbe20302c3b591676275481c40de5"}
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883661 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerID="40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75" exitCode=0
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883707 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerID="32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb" exitCode=2
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883718 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerID="3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49" exitCode=0
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75"}
Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883834 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb"}
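The four "Killing container with a grace period" lines above are a coordinated stop of ceilometer-0's containers: the kubelet asks CRI-O to stop each one with a 30-second deadline before escalating to SIGKILL. The same grace-period plumbing is reachable through the API; a minimal client-go sketch of a delete matching the 30s seen here (illustrative only; the namespace and pod name are taken from the log, and kubeconfig handling is the simplest possible):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; in-cluster config would work the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	grace := int64(30) // matches gracePeriod=30 in the log
	if err := cs.CoreV1().Pods("openstack").Delete(context.TODO(),
		"ceilometer-0", metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}

The outcome is visible a second later: proxy-httpd and the agents exit 0, while sg-core exits 2, presumably because it does not handle SIGTERM cleanly.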
event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.883847 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.897004 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerStarted","Data":"f2e7accbcbe637e8c09e5e8b0f36dc637fc3678eaf4f2f32a1c64ce436c7b4d7"} Feb 02 14:53:28 crc kubenswrapper[4869]: I0202 14:53:28.919479 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.027988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.029231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.029378 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.029498 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.029525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") pod \"9c561af1-f926-4ced-9d2e-05778fed8a44\" (UID: \"9c561af1-f926-4ced-9d2e-05778fed8a44\") " Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.031631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs" (OuterVolumeSpecName: "logs") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.033791 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.036752 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj" (OuterVolumeSpecName: "kube-api-access-7qkxj") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "kube-api-access-7qkxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.058029 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.088211 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data" (OuterVolumeSpecName: "config-data") pod "9c561af1-f926-4ced-9d2e-05778fed8a44" (UID: "9c561af1-f926-4ced-9d2e-05778fed8a44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141112 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c561af1-f926-4ced-9d2e-05778fed8a44-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141155 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qkxj\" (UniqueName: \"kubernetes.io/projected/9c561af1-f926-4ced-9d2e-05778fed8a44-kube-api-access-7qkxj\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141169 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141180 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.141190 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c561af1-f926-4ced-9d2e-05778fed8a44-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.410528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.482809 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3a4838-a42e-4ff4-a4b2-7dd079089a42" path="/var/lib/kubelet/pods/2b3a4838-a42e-4ff4-a4b2-7dd079089a42/volumes" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.912518 4869 generic.go:334] "Generic (PLEG): container finished" podID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerID="e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f" exitCode=0 Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.913214 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" 
event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerDied","Data":"e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.918110 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerID="905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121" exitCode=0 Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.918204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.918246 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe3740ce-c24a-48b4-aab3-d1da5bf36089","Type":"ContainerDied","Data":"9a54c86921d5b0ef544bfd0a64a504e7bbbc4ab3d0006b551a598232317f2a2b"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.918263 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a54c86921d5b0ef544bfd0a64a504e7bbbc4ab3d0006b551a598232317f2a2b" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.925242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerStarted","Data":"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.929804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59bd6db9d6-z6bh8" event={"ID":"9c561af1-f926-4ced-9d2e-05778fed8a44","Type":"ContainerDied","Data":"19168bf00636e82517104edb62ea76888bc20e0c4172a4adeba60255d42d7f18"} Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.929879 4869 scope.go:117] "RemoveContainer" containerID="a42f2e7a9320e6d8a4fa38df8f72ac30a420b6f33e6199fe9772af3ebb5ca287" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.929980 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59bd6db9d6-z6bh8" Feb 02 14:53:29 crc kubenswrapper[4869]: I0202 14:53:29.940918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerStarted","Data":"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"} Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.042740 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.052013 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.059979 4869 scope.go:117] "RemoveContainer" containerID="30a2b5b0d841bb993dcba1509488d72a31ecef9af2615fd62467042d6cafd5e8" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.060600 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-59bd6db9d6-z6bh8"] Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.176665 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178212 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178350 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178425 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178473 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178564 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.178593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") pod \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\" (UID: \"fe3740ce-c24a-48b4-aab3-d1da5bf36089\") " Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.179128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.179325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.184269 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5" (OuterVolumeSpecName: "kube-api-access-n68q5") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "kube-api-access-n68q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.187341 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts" (OuterVolumeSpecName: "scripts") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.212233 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.262721 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281258 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281303 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281318 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281330 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281343 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe3740ce-c24a-48b4-aab3-d1da5bf36089-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.281356 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n68q5\" (UniqueName: \"kubernetes.io/projected/fe3740ce-c24a-48b4-aab3-d1da5bf36089-kube-api-access-n68q5\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.293602 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data" (OuterVolumeSpecName: "config-data") pod "fe3740ce-c24a-48b4-aab3-d1da5bf36089" (UID: "fe3740ce-c24a-48b4-aab3-d1da5bf36089"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.382522 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3740ce-c24a-48b4-aab3-d1da5bf36089-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.954419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerStarted","Data":"c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6"} Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.956390 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.962932 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerStarted","Data":"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e"} Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.963144 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.963170 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api-log" containerID="cri-o://c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" gracePeriod=30 Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.963189 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api" containerID="cri-o://3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" gracePeriod=30 Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.971440 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:53:30 crc kubenswrapper[4869]: I0202 14:53:30.971438 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerStarted","Data":"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"} Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.003641 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" podStartSLOduration=4.003610559 podStartE2EDuration="4.003610559s" podCreationTimestamp="2026-02-02 14:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:30.976628891 +0000 UTC m=+1212.621265661" watchObservedRunningTime="2026-02-02 14:53:31.003610559 +0000 UTC m=+1212.648247329" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.006266 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.076521869 podStartE2EDuration="4.006251615s" podCreationTimestamp="2026-02-02 14:53:27 +0000 UTC" firstStartedPulling="2026-02-02 14:53:28.078419045 +0000 UTC m=+1209.723055825" lastFinishedPulling="2026-02-02 14:53:29.008148801 +0000 UTC m=+1210.652785571" observedRunningTime="2026-02-02 14:53:31.004171894 +0000 UTC m=+1212.648808664" watchObservedRunningTime="2026-02-02 14:53:31.006251615 +0000 UTC m=+1212.650888415" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.051473 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.051440933 podStartE2EDuration="4.051440933s" podCreationTimestamp="2026-02-02 14:53:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:31.036395851 +0000 UTC m=+1212.681032621" watchObservedRunningTime="2026-02-02 14:53:31.051440933 +0000 UTC m=+1212.696077703" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.063369 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.076840 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.088617 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089219 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-central-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089249 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-central-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089281 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-notification-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089291 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-notification-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089305 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" 
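The "Observed pod startup duration" lines just above distinguish two measures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally discounts the image-pull window (lastFinishedPulling minus firstStartedPulling); zero-valued pull timestamps ("0001-01-01 ...") mean no pull was needed, so the two durations coincide, as for dnsmasq and cinder-api. A minimal sketch reproducing the arithmetic from the ceilometer-0 entry earlier in this log (timestamps copied from that entry, monotonic "m=+..." suffixes trimmed):

package main

import (
	"fmt"
	"time"
)

// Layout matching Go's default time.String() output used in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-02 14:52:30 +0000 UTC")
	firstPull := mustParse("2026-02-02 14:52:32.269553226 +0000 UTC")
	lastPull := mustParse("2026-02-02 14:53:26.454145526 +0000 UTC")
	running := mustParse("2026-02-02 14:53:27.793604435 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)              // 57.793604435s, as logged
	slo := e2e - lastPull.Sub(firstPull)     // 3.609012135s: pulling dominated startup
	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
}

The RemoveStaleState block that follows is the same admission-time housekeeping seen at 14:53:27, now for the replaced ceilometer-0 and barbican-api containers.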
containerName="sg-core" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="sg-core" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089326 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089335 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089356 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="proxy-httpd" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089364 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="proxy-httpd" Feb 02 14:53:31 crc kubenswrapper[4869]: E0202 14:53:31.089374 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api-log" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089381 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api-log" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089595 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-central-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089632 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089650 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="sg-core" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089672 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="ceilometer-notification-agent" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089687 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" containerName="barbican-api-log" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.089706 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" containerName="proxy-httpd" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.091902 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.099342 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.099481 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.101721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.101945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102215 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102335 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102419 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102487 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.102678 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.119111 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.204776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.204978 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205069 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205128 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.205207 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.206042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.206934 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.213042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.237823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.238834 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.240252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.240695 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") pod \"ceilometer-0\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.481126 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.494511 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c561af1-f926-4ced-9d2e-05778fed8a44" path="/var/lib/kubelet/pods/9c561af1-f926-4ced-9d2e-05778fed8a44/volumes" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.495831 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3740ce-c24a-48b4-aab3-d1da5bf36089" path="/var/lib/kubelet/pods/fe3740ce-c24a-48b4-aab3-d1da5bf36089/volumes" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.785462 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.945802 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.946869 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.946974 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc 
kubenswrapper[4869]: I0202 14:53:31.947306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947429 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") pod \"b6c7f465-f9c2-4384-9c28-18d85ff08928\" (UID: \"b6c7f465-f9c2-4384-9c28-18d85ff08928\") " Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs" (OuterVolumeSpecName: "logs") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.947684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.948142 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c7f465-f9c2-4384-9c28-18d85ff08928-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.948169 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c7f465-f9c2-4384-9c28-18d85ff08928-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.953461 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw" (OuterVolumeSpecName: "kube-api-access-cm2jw") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "kube-api-access-cm2jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.953572 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts" (OuterVolumeSpecName: "scripts") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.979300 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.981148 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.988750 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" exitCode=0 Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.988806 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" exitCode=143 Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.990533 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.991404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerDied","Data":"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e"} Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.991446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerDied","Data":"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295"} Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.991463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6c7f465-f9c2-4384-9c28-18d85ff08928","Type":"ContainerDied","Data":"f2e7accbcbe637e8c09e5e8b0f36dc637fc3678eaf4f2f32a1c64ce436c7b4d7"} Feb 02 14:53:31 crc kubenswrapper[4869]: I0202 14:53:31.991483 4869 scope.go:117] "RemoveContainer" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.013734 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data" (OuterVolumeSpecName: "config-data") pod "b6c7f465-f9c2-4384-9c28-18d85ff08928" (UID: "b6c7f465-f9c2-4384-9c28-18d85ff08928"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.017402 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.031667 4869 scope.go:117] "RemoveContainer" containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050079 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050127 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm2jw\" (UniqueName: \"kubernetes.io/projected/b6c7f465-f9c2-4384-9c28-18d85ff08928-kube-api-access-cm2jw\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050141 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050151 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.050161 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c7f465-f9c2-4384-9c28-18d85ff08928-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.061939 4869 scope.go:117] "RemoveContainer" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" Feb 02 14:53:32 crc kubenswrapper[4869]: E0202 14:53:32.062625 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": container with ID starting with 3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e not found: ID does not exist" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.062683 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e"} err="failed to get container status \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": rpc error: code = NotFound desc = could not find container \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": container with ID starting with 3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e not found: ID does not exist" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.062720 4869 scope.go:117] "RemoveContainer" containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" Feb 02 14:53:32 crc kubenswrapper[4869]: E0202 14:53:32.063099 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": container with ID starting with c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295 not found: ID does not exist" 
containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063144 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295"} err="failed to get container status \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": rpc error: code = NotFound desc = could not find container \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": container with ID starting with c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295 not found: ID does not exist" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063175 4869 scope.go:117] "RemoveContainer" containerID="3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063406 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e"} err="failed to get container status \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": rpc error: code = NotFound desc = could not find container \"3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e\": container with ID starting with 3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e not found: ID does not exist" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063437 4869 scope.go:117] "RemoveContainer" containerID="c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.063643 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295"} err="failed to get container status \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": rpc error: code = NotFound desc = could not find container \"c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295\": container with ID starting with c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295 not found: ID does not exist" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.257739 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-bb87b4954-l5h9p" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.141:9696/\": dial tcp 10.217.0.141:9696: connect: connection refused" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.335697 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.355941 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.366734 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: E0202 14:53:32.367443 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.367475 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api" Feb 02 14:53:32 crc kubenswrapper[4869]: E0202 14:53:32.367529 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api-log" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.367542 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api-log" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.367795 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api-log" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.367834 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" containerName="cinder-api" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.369230 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.373017 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.373321 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.375390 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.377101 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.378577 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs27z\" (UniqueName: \"kubernetes.io/projected/1fbb1ee0-3403-49aa-9e5c-3926dd981751-kube-api-access-rs27z\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568261 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568355 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fbb1ee0-3403-49aa-9e5c-3926dd981751-logs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568391 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-scripts\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: 
I0202 14:53:32.568526 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568659 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.568699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.569004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fbb1ee0-3403-49aa-9e5c-3926dd981751-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671279 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fbb1ee0-3403-49aa-9e5c-3926dd981751-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671383 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs27z\" (UniqueName: \"kubernetes.io/projected/1fbb1ee0-3403-49aa-9e5c-3926dd981751-kube-api-access-rs27z\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671419 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671443 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fbb1ee0-3403-49aa-9e5c-3926dd981751-logs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671505 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-scripts\") pod \"cinder-api-0\" (UID: 
\"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.671651 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.672152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fbb1ee0-3403-49aa-9e5c-3926dd981751-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.672787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fbb1ee0-3403-49aa-9e5c-3926dd981751-logs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.678479 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.678506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data-custom\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.679614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-scripts\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.679894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-config-data\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.683879 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " 
pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.685869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fbb1ee0-3403-49aa-9e5c-3926dd981751-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.696667 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs27z\" (UniqueName: \"kubernetes.io/projected/1fbb1ee0-3403-49aa-9e5c-3926dd981751-kube-api-access-rs27z\") pod \"cinder-api-0\" (UID: \"1fbb1ee0-3403-49aa-9e5c-3926dd981751\") " pod="openstack/cinder-api-0" Feb 02 14:53:32 crc kubenswrapper[4869]: I0202 14:53:32.986529 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 14:53:33 crc kubenswrapper[4869]: I0202 14:53:33.004967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"2a3a8afa5f4f39b9c1443825049b785119a54a533b4cf3c5d4655fb9914dd6f0"} Feb 02 14:53:33 crc kubenswrapper[4869]: I0202 14:53:33.444268 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 14:53:33 crc kubenswrapper[4869]: I0202 14:53:33.476143 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c7f465-f9c2-4384-9c28-18d85ff08928" path="/var/lib/kubelet/pods/b6c7f465-f9c2-4384-9c28-18d85ff08928/volumes" Feb 02 14:53:34 crc kubenswrapper[4869]: I0202 14:53:34.016048 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689"} Feb 02 14:53:34 crc kubenswrapper[4869]: I0202 14:53:34.019067 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fbb1ee0-3403-49aa-9e5c-3926dd981751","Type":"ContainerStarted","Data":"ddea4a48b8633b4394cc12365b06bb9f9213034a3028ea7a9e898361896bc268"} Feb 02 14:53:34 crc kubenswrapper[4869]: I0202 14:53:34.019120 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fbb1ee0-3403-49aa-9e5c-3926dd981751","Type":"ContainerStarted","Data":"404436d8eaf31f75b99403a98828292902d7571560b26df1ebe76d9a5c3c9e59"} Feb 02 14:53:34 crc kubenswrapper[4869]: I0202 14:53:34.764842 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c4d7559c7-79dhq" Feb 02 14:53:35 crc kubenswrapper[4869]: I0202 14:53:35.033508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505"} Feb 02 14:53:35 crc kubenswrapper[4869]: I0202 14:53:35.036608 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1fbb1ee0-3403-49aa-9e5c-3926dd981751","Type":"ContainerStarted","Data":"ff3e0f7641de5392159f1e81cf81a107d01673a12900046b51f5863f5740bed3"} Feb 02 14:53:35 crc kubenswrapper[4869]: I0202 14:53:35.036872 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 14:53:35 crc kubenswrapper[4869]: I0202 14:53:35.083318 4869 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.083290193 podStartE2EDuration="3.083290193s" podCreationTimestamp="2026-02-02 14:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:35.060958301 +0000 UTC m=+1216.705595071" watchObservedRunningTime="2026-02-02 14:53:35.083290193 +0000 UTC m=+1216.727926963" Feb 02 14:53:36 crc kubenswrapper[4869]: I0202 14:53:36.049621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876"} Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.542480 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.624800 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"] Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.625141 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-ttvch" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="dnsmasq-dns" containerID="cri-o://639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55" gracePeriod=10 Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.679887 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 14:53:37 crc kubenswrapper[4869]: I0202 14:53:37.784128 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.086617 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerID="639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55" exitCode=0 Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.087286 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="cinder-scheduler" containerID="cri-o://b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5" gracePeriod=30 Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.087414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerDied","Data":"639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55"} Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.087722 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="probe" containerID="cri-o://54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2" gracePeriod=30 Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.219171 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ttvch" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419462 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419553 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419827 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419867 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.419941 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") pod \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\" (UID: \"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4\") " Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.431579 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp" (OuterVolumeSpecName: "kube-api-access-k2rmp") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "kube-api-access-k2rmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.480198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config" (OuterVolumeSpecName: "config") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.488419 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.506336 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.515161 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" (UID: "cc1dcc76-d41e-4492-95d0-dcbb0b1254b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522672 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522710 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522723 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522734 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2rmp\" (UniqueName: \"kubernetes.io/projected/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-kube-api-access-k2rmp\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:38 crc kubenswrapper[4869]: I0202 14:53:38.522752 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.110833 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerID="54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2" exitCode=0 Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.110984 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerDied","Data":"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"} Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.114652 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-ttvch" event={"ID":"cc1dcc76-d41e-4492-95d0-dcbb0b1254b4","Type":"ContainerDied","Data":"3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c"} Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.114791 4869 scope.go:117] "RemoveContainer" containerID="639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55" Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.115566 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-ttvch"
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.134439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerStarted","Data":"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f"}
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.144288 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.183095 4869 scope.go:117] "RemoveContainer" containerID="9d24ac1d4cb800028d8b0cae08d3371a0141fabf6b8ee870243781d99e8bd219"
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.257518 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.494692736 podStartE2EDuration="8.257488826s" podCreationTimestamp="2026-02-02 14:53:31 +0000 UTC" firstStartedPulling="2026-02-02 14:53:32.031808573 +0000 UTC m=+1213.676445343" lastFinishedPulling="2026-02-02 14:53:37.794604663 +0000 UTC m=+1219.439241433" observedRunningTime="2026-02-02 14:53:39.219123876 +0000 UTC m=+1220.863760646" watchObservedRunningTime="2026-02-02 14:53:39.257488826 +0000 UTC m=+1220.902125606"
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.287310 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"]
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.293922 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-ttvch"]
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.476364 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" path="/var/lib/kubelet/pods/cc1dcc76-d41e-4492-95d0-dcbb0b1254b4/volumes"
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.579351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79c776b57b-76pd5"
Feb 02 14:53:39 crc kubenswrapper[4869]: I0202 14:53:39.640143 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79c776b57b-76pd5"
Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.420896 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-575599577-dmndq"
Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.462611 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bbd64cf97-7t5h5"
Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.557859 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"]
Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.558208 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c4d7559c7-79dhq" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-api" containerID="cri-o://a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023" gracePeriod=30
Feb 02 14:53:40 crc kubenswrapper[4869]: I0202 14:53:40.558825 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c4d7559c7-79dhq" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-httpd" containerID="cri-o://b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb" gracePeriod=30
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.161262 4869 generic.go:334] "Generic (PLEG): container finished" podID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerID="b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb" exitCode=0
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.161368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerDied","Data":"b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb"}
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.746118 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.827857 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") "
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828398 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") "
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828506 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") "
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828547 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") "
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828589 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") "
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828787 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") pod \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\" (UID: \"a1598fcb-466e-4c4c-8429-1a211bfcfc19\") "
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.828813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.829549 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1598fcb-466e-4c4c-8429-1a211bfcfc19-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.838094 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts" (OuterVolumeSpecName: "scripts") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.838175 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.865011 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr" (OuterVolumeSpecName: "kube-api-access-8bpnr") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "kube-api-access-8bpnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.908590 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.936379 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bpnr\" (UniqueName: \"kubernetes.io/projected/a1598fcb-466e-4c4c-8429-1a211bfcfc19-kube-api-access-8bpnr\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.936711 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.936798 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.936877 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:41 crc kubenswrapper[4869]: I0202 14:53:41.960061 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data" (OuterVolumeSpecName: "config-data") pod "a1598fcb-466e-4c4c-8429-1a211bfcfc19" (UID: "a1598fcb-466e-4c4c-8429-1a211bfcfc19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.042387 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1598fcb-466e-4c4c-8429-1a211bfcfc19-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174013 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerID="b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5" exitCode=0
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174063 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerDied","Data":"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"}
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1598fcb-466e-4c4c-8429-1a211bfcfc19","Type":"ContainerDied","Data":"803147708c69b2f495d5e0819fb5fcae8a7b960c9ff123b14eea9ec0607d19e2"}
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174112 4869 scope.go:117] "RemoveContainer" containerID="54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.174215 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.219547 4869 scope.go:117] "RemoveContainer" containerID="b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.220532 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.243031 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.256211 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.257203 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="dnsmasq-dns"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.257223 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="dnsmasq-dns"
Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.257238 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="init"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.257244 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="init"
Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.257256 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="cinder-scheduler"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.257263 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="cinder-scheduler"
Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.257674 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="probe"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.257685 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="probe"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.258063 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1dcc76-d41e-4492-95d0-dcbb0b1254b4" containerName="dnsmasq-dns"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.258097 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="probe"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.258117 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" containerName="cinder-scheduler"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.259827 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.263313 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.277551 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.282940 4869 scope.go:117] "RemoveContainer" containerID="54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"
Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.289373 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2\": container with ID starting with 54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2 not found: ID does not exist" containerID="54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.289438 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2"} err="failed to get container status \"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2\": rpc error: code = NotFound desc = could not find container \"54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2\": container with ID starting with 54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2 not found: ID does not exist"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.289476 4869 scope.go:117] "RemoveContainer" containerID="b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"
Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.296273 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5\": container with ID starting with b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5 not found: ID does not exist" containerID="b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.296348 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5"} err="failed to get container status \"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5\": rpc error: code = NotFound desc = could not find container \"b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5\": container with ID starting with b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5 not found: ID does not exist"
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371388 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-803147708c69b2f495d5e0819fb5fcae8a7b960c9ff123b14eea9ec0607d19e2": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-803147708c69b2f495d5e0819fb5fcae8a7b960c9ff123b14eea9ec0607d19e2: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371578 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-f2e7accbcbe637e8c09e5e8b0f36dc637fc3678eaf4f2f32a1c64ce436c7b4d7": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-f2e7accbcbe637e8c09e5e8b0f36dc637fc3678eaf4f2f32a1c64ce436c7b4d7: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371751 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0c79bc_79ef_4876_b621_25ff976ecad2.slice/crio-conmon-e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0c79bc_79ef_4876_b621_25ff976ecad2.slice/crio-conmon-e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371781 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0c79bc_79ef_4876_b621_25ff976ecad2.slice/crio-e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0c79bc_79ef_4876_b621_25ff976ecad2.slice/crio-e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371800 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-conmon-c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-conmon-c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.371818 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-c17e541a0391a5ab4d3af30807de60f3811e0be82d45f8bbf1f14e975c566295.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.373791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.373845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.374107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.374371 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.374694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2sd2\" (UniqueName: \"kubernetes.io/projected/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-kube-api-access-v2sd2\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.374859 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.379872 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-conmon-b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-conmon-b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.379976 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-b4214e4c538bbd300c2b7caadca2a67e6d81ad1496ea018295cd7c1692d153c5.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.380002 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-conmon-3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-conmon-3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.380029 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice/crio-3416c680c30ffa4504de170ac8df8282fe50ec110dfeec9a39aa4485ba40329e.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.380315 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-conmon-54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-conmon-54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: W0202 14:53:42.380344 4869 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice/crio-54cd1f12240679ef8083080bad629ffd700a11a34091524bd58c21196d58acd2.scope: no such file or directory
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480344 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2sd2\" (UniqueName: \"kubernetes.io/projected/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-kube-api-access-v2sd2\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480367 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.480453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.490155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.490427 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.494665 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.502506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-scripts\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.516901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2sd2\" (UniqueName: \"kubernetes.io/projected/d8f007a5-a428-44ff-8c6d-5de0d08beb7c-kube-api-access-v2sd2\") pod \"cinder-scheduler-0\" (UID: \"d8f007a5-a428-44ff-8c6d-5de0d08beb7c\") " pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: I0202 14:53:42.584502 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 14:53:42 crc kubenswrapper[4869]: E0202 14:53:42.615779 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c7f465_f9c2_4384_9c28_18d85ff08928.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7fa8424_d792_4e4f_bd02_d7369407b5ad.slice/crio-b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb918eb2a_3cab_422f_ba7d_f06c4ec21ef4.slice/crio-conmon-c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7fa8424_d792_4e4f_bd02_d7369407b5ad.slice/crio-conmon-b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1dcc76_d41e_4492_95d0_dcbb0b1254b4.slice/crio-639f3e360f9ddd038ae221692dc37d5fd4e73285294cd43b7766798c840cac55.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1598fcb_466e_4c4c_8429_1a211bfcfc19.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1dcc76_d41e_4492_95d0_dcbb0b1254b4.slice/crio-3c657898578c35c3ae5e782275a540a7d34bda1e6ddbf6ef9b56bdcd9ecc225c\": RecentStats: unable to find data in memory cache]"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.120464 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bb87b4954-l5h9p_b918eb2a-3cab-422f-ba7d-f06c4ec21ef4/neutron-api/0.log"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.121139 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb87b4954-l5h9p"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189789 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bb87b4954-l5h9p_b918eb2a-3cab-422f-ba7d-f06c4ec21ef4/neutron-api/0.log"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189846 4869 generic.go:334] "Generic (PLEG): container finished" podID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerID="c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a" exitCode=137
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerDied","Data":"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"}
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189941 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb87b4954-l5h9p" event={"ID":"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4","Type":"ContainerDied","Data":"120c43304cec581dc8d0f93485a0a11dc2583d6103478c7dfda0d8888d486791"}
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.189963 4869 scope.go:117] "RemoveContainer" containerID="5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.190199 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb87b4954-l5h9p"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") "
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") "
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200387 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") "
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200438 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") "
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.200486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") pod \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\" (UID: \"b918eb2a-3cab-422f-ba7d-f06c4ec21ef4\") "
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.210085 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj" (OuterVolumeSpecName: "kube-api-access-6wztj") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "kube-api-access-6wztj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.224259 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.241123 4869 scope.go:117] "RemoveContainer" containerID="c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.254737 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc5588748-k6f99"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.265213 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.296686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config" (OuterVolumeSpecName: "config") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.304161 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.304202 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.304214 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wztj\" (UniqueName: \"kubernetes.io/projected/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-kube-api-access-6wztj\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.304227 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.321264 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc5588748-k6f99"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.323681 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.342462 4869 scope.go:117] "RemoveContainer" containerID="5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"
Feb 02 14:53:43 crc kubenswrapper[4869]: E0202 14:53:43.343508 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d\": container with ID starting with 5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d not found: ID does not exist" containerID="5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.343571 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d"} err="failed to get container status \"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d\": rpc error: code = NotFound desc = could not find container \"5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d\": container with ID starting with 5ee833f43e68e30b4ec780092383d02b35ee0942ddf70a5b6c4b59c899dcce6d not found: ID does not exist"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.343611 4869 scope.go:117] "RemoveContainer" containerID="c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"
Feb 02 14:53:43 crc kubenswrapper[4869]: E0202 14:53:43.343929 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a\": container with ID starting with c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a not found: ID does not exist" containerID="c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.343952 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a"} err="failed to get container status \"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a\": rpc error: code = NotFound desc = could not find container \"c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a\": container with ID starting with c2a0397cf816d251f5f465037eee48a1c61cd596115c617f73970a11824c529a not found: ID does not exist"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.429022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" (UID: "b918eb2a-3cab-422f-ba7d-f06c4ec21ef4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.431239 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-79c776b57b-76pd5"]
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.431698 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-79c776b57b-76pd5" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-log" containerID="cri-o://cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d" gracePeriod=30
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.432273 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-79c776b57b-76pd5" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-api" containerID="cri-o://c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115" gracePeriod=30
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.497388 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1598fcb-466e-4c4c-8429-1a211bfcfc19" path="/var/lib/kubelet/pods/a1598fcb-466e-4c4c-8429-1a211bfcfc19/volumes"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.510641 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 02 14:53:43 crc kubenswrapper[4869]: E0202 14:53:43.511050 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-api"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.511071 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-api"
Feb 02 14:53:43 crc kubenswrapper[4869]: E0202 14:53:43.511118 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.511131 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.511326 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-api"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.511359 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" containerName="neutron-httpd"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.512689 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.516293 4869 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.522806 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.522886 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-v6krz"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.523472 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.542087 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.599750 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"]
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.616784 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bb87b4954-l5h9p"]
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.619539 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config-secret\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.619621 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6mqr\" (UniqueName: \"kubernetes.io/projected/9c3c55b0-c9be-4635-9562-347406f90dff-kube-api-access-k6mqr\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.619677 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.619711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.721526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config-secret\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.721612 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6mqr\" (UniqueName: \"kubernetes.io/projected/9c3c55b0-c9be-4635-9562-347406f90dff-kube-api-access-k6mqr\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.721674 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.721718 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.722983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.727732 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.730874 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c3c55b0-c9be-4635-9562-347406f90dff-openstack-config-secret\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.745459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6mqr\" (UniqueName: \"kubernetes.io/projected/9c3c55b0-c9be-4635-9562-347406f90dff-kube-api-access-k6mqr\") pod \"openstackclient\" (UID: \"9c3c55b0-c9be-4635-9562-347406f90dff\") " pod="openstack/openstackclient"
Feb 02 14:53:43 crc kubenswrapper[4869]: I0202 14:53:43.890807 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 02 14:53:44 crc kubenswrapper[4869]: I0202 14:53:44.292184 4869 generic.go:334] "Generic (PLEG): container finished" podID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerID="cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d" exitCode=143
Feb 02 14:53:44 crc kubenswrapper[4869]: I0202 14:53:44.293609 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerDied","Data":"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"}
Feb 02 14:53:44 crc kubenswrapper[4869]: I0202 14:53:44.296661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8f007a5-a428-44ff-8c6d-5de0d08beb7c","Type":"ContainerStarted","Data":"8a26183b7c7e9706d3d6df35e1fc3c81acb49df13d6ac4dddb74f90a9b0c75d8"}
Feb 02 14:53:44 crc kubenswrapper[4869]: I0202 14:53:44.514762 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 02 14:53:44 crc kubenswrapper[4869]: W0202 14:53:44.514951 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c3c55b0_c9be_4635_9562_347406f90dff.slice/crio-298a55e7ac3d14d5a229c579fff16094e1a70a819d3fd2fbd748606633424f01 WatchSource:0}: Error finding container 298a55e7ac3d14d5a229c579fff16094e1a70a819d3fd2fbd748606633424f01: Status 404 returned error can't find the container with id 298a55e7ac3d14d5a229c579fff16094e1a70a819d3fd2fbd748606633424f01
Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.313883 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8f007a5-a428-44ff-8c6d-5de0d08beb7c","Type":"ContainerStarted","Data":"906a2f9e990fbc8c5e19d425341489eff99b3f77f960f279696d24c68004ddda"}
Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.314373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d8f007a5-a428-44ff-8c6d-5de0d08beb7c","Type":"ContainerStarted","Data":"291fc9b297d71d064b6e249c4f7f64024554cdfb1d9bed064aa5dd85c2bb63d6"}
Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.317939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9c3c55b0-c9be-4635-9562-347406f90dff","Type":"ContainerStarted","Data":"298a55e7ac3d14d5a229c579fff16094e1a70a819d3fd2fbd748606633424f01"}
Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.345999 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.345972288 podStartE2EDuration="3.345972288s" podCreationTimestamp="2026-02-02 14:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:53:45.340272997 +0000 UTC m=+1226.984909767" watchObservedRunningTime="2026-02-02 14:53:45.345972288 +0000 UTC m=+1226.990609058"
Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.477130 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b918eb2a-3cab-422f-ba7d-f06c4ec21ef4" path="/var/lib/kubelet/pods/b918eb2a-3cab-422f-ba7d-f06c4ec21ef4/volumes"
Feb 02 14:53:45 crc kubenswrapper[4869]: I0202 14:53:45.589680 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.350893 4869 generic.go:334] "Generic (PLEG): container finished" podID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerID="a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023" exitCode=0
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.351126 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerDied","Data":"a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023"}
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.541195 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4d7559c7-79dhq"
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613388 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613409 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613466 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613653 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.613965 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") pod \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\" (UID: \"c7fa8424-d792-4e4f-bd02-d7369407b5ad\") "
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.644668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.660212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf" (OuterVolumeSpecName: "kube-api-access-pfjmf") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "kube-api-access-pfjmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.687347 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.708128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config" (OuterVolumeSpecName: "config") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.710128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.717274 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.719396 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.719420 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-config\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.719433 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfjmf\" (UniqueName: \"kubernetes.io/projected/c7fa8424-d792-4e4f-bd02-d7369407b5ad-kube-api-access-pfjmf\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.719446 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.733413 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.763971 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c7fa8424-d792-4e4f-bd02-d7369407b5ad" (UID: "c7fa8424-d792-4e4f-bd02-d7369407b5ad"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.822391 4869 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.823069 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7fa8424-d792-4e4f-bd02-d7369407b5ad-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:46 crc kubenswrapper[4869]: I0202 14:53:46.974794 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79c776b57b-76pd5"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.030734 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.030932 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031042 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031185 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031799 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.031868 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") pod \"9a6e5980-cab0-4c02-9d50-0633106097cb\" (UID: \"9a6e5980-cab0-4c02-9d50-0633106097cb\") "
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.038990 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9" (OuterVolumeSpecName: "kube-api-access-kk2f9") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "kube-api-access-kk2f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.040873 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts" (OuterVolumeSpecName: "scripts") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.041631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs" (OuterVolumeSpecName: "logs") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.137380 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk2f9\" (UniqueName: \"kubernetes.io/projected/9a6e5980-cab0-4c02-9d50-0633106097cb-kube-api-access-kk2f9\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.137638 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.137700 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6e5980-cab0-4c02-9d50-0633106097cb-logs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.154562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data" (OuterVolumeSpecName: "config-data") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.166116 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.226492 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.240686 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.240729 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.240744 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.245914 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9a6e5980-cab0-4c02-9d50-0633106097cb" (UID: "9a6e5980-cab0-4c02-9d50-0633106097cb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.345589 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6e5980-cab0-4c02-9d50-0633106097cb-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.371602 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4d7559c7-79dhq" event={"ID":"c7fa8424-d792-4e4f-bd02-d7369407b5ad","Type":"ContainerDied","Data":"45f00cd48b456ba32635e74b444d036ced51d5190a5131b65618e8664fdb1787"}
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.371676 4869 scope.go:117] "RemoveContainer" containerID="b98bb8ee9ab743526dde457cbb993e0cc438ea89b82e9ca013420866bee3d8bb"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.371679 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4d7559c7-79dhq"
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.376761 4869 generic.go:334] "Generic (PLEG): container finished" podID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerID="c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115" exitCode=0
Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.376826 4869 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-79c776b57b-76pd5" Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.376843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerDied","Data":"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"} Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.376887 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79c776b57b-76pd5" event={"ID":"9a6e5980-cab0-4c02-9d50-0633106097cb","Type":"ContainerDied","Data":"f71f18fd5c51bc2ff8e4203c7e7213ae442d57834261ba22fc6581334d9a1f73"} Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.432828 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.438819 4869 scope.go:117] "RemoveContainer" containerID="a49c8a4164ff9e8005301591ccaba9e10c6d8a826a8348fe14a6ec69c3350023" Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.446434 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-79c776b57b-76pd5"] Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.533717 4869 scope.go:117] "RemoveContainer" containerID="c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115" Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.536744 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" path="/var/lib/kubelet/pods/9a6e5980-cab0-4c02-9d50-0633106097cb/volumes" Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.537504 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.537535 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c4d7559c7-79dhq"] Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.585853 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.601155 4869 scope.go:117] "RemoveContainer" containerID="cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d" Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.655125 4869 scope.go:117] "RemoveContainer" containerID="c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115" Feb 02 14:53:47 crc kubenswrapper[4869]: E0202 14:53:47.656236 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115\": container with ID starting with c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115 not found: ID does not exist" containerID="c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115" Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.656300 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115"} err="failed to get container status \"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115\": rpc error: code = NotFound desc = could not find container \"c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115\": container with ID starting with c3d91ba41d874d42a11ee9d5fdbd271ddbb7260947e7af4c7225a9b537289115 not found: ID does not exist" Feb 02 14:53:47 crc 
kubenswrapper[4869]: I0202 14:53:47.656365 4869 scope.go:117] "RemoveContainer" containerID="cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d" Feb 02 14:53:47 crc kubenswrapper[4869]: E0202 14:53:47.658189 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d\": container with ID starting with cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d not found: ID does not exist" containerID="cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d" Feb 02 14:53:47 crc kubenswrapper[4869]: I0202 14:53:47.658217 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d"} err="failed to get container status \"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d\": rpc error: code = NotFound desc = could not find container \"cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d\": container with ID starting with cecc6c80f6b87f40ab88c6b6852414fafb6b3eb0cd0837e67eb745a832ee094d not found: ID does not exist" Feb 02 14:53:49 crc kubenswrapper[4869]: I0202 14:53:49.474239 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" path="/var/lib/kubelet/pods/c7fa8424-d792-4e4f-bd02-d7369407b5ad/volumes" Feb 02 14:53:52 crc kubenswrapper[4869]: I0202 14:53:52.849479 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 14:53:57 crc kubenswrapper[4869]: I0202 14:53:57.501135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9c3c55b0-c9be-4635-9562-347406f90dff","Type":"ContainerStarted","Data":"266c16280253b1077268ac63c782114a693c22a38707b7b1728ac8ec0d489988"} Feb 02 14:53:57 crc kubenswrapper[4869]: I0202 14:53:57.524449 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.591373014 podStartE2EDuration="14.524420244s" podCreationTimestamp="2026-02-02 14:53:43 +0000 UTC" firstStartedPulling="2026-02-02 14:53:44.524262787 +0000 UTC m=+1226.168899557" lastFinishedPulling="2026-02-02 14:53:56.457310017 +0000 UTC m=+1238.101946787" observedRunningTime="2026-02-02 14:53:57.520867256 +0000 UTC m=+1239.165504026" watchObservedRunningTime="2026-02-02 14:53:57.524420244 +0000 UTC m=+1239.169057014" Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.265405 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.265790 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-central-agent" containerID="cri-o://494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" gracePeriod=30 Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.266413 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" containerID="cri-o://062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" gracePeriod=30 Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.266521 4869 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-notification-agent" containerID="cri-o://5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" gracePeriod=30 Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.266586 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="sg-core" containerID="cri-o://603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" gracePeriod=30 Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.282788 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.157:3000/\": EOF" Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.523128 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerID="062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" exitCode=0 Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.524323 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerID="603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" exitCode=2 Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.525525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f"} Feb 02 14:53:58 crc kubenswrapper[4869]: I0202 14:53:58.525656 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876"} Feb 02 14:53:59 crc kubenswrapper[4869]: I0202 14:53:59.536996 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerID="494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" exitCode=0 Feb 02 14:53:59 crc kubenswrapper[4869]: I0202 14:53:59.537083 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689"} Feb 02 14:54:01 crc kubenswrapper[4869]: I0202 14:54:01.483179 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.157:3000/\": dial tcp 10.217.0.157:3000: connect: connection refused" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.192332 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9kpbk"] Feb 02 14:54:02 crc kubenswrapper[4869]: E0202 14:54:02.193440 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-httpd" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193466 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-httpd" Feb 02 14:54:02 crc kubenswrapper[4869]: E0202 14:54:02.193492 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-api" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-api" Feb 02 14:54:02 crc kubenswrapper[4869]: E0202 14:54:02.193520 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-api" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193528 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-api" Feb 02 14:54:02 crc kubenswrapper[4869]: E0202 14:54:02.193550 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-log" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193566 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-log" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193759 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-log" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193787 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-api" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193798 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fa8424-d792-4e4f-bd02-d7369407b5ad" containerName="neutron-httpd" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.193818 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6e5980-cab0-4c02-9d50-0633106097cb" containerName="placement-api" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.194812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.203880 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9kpbk"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.287808 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.289936 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.299255 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.299818 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.300133 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.300287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.307228 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.395256 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-gssfn"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.399268 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.402580 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.402665 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.402718 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.402739 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.403890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.404326 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.410263 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.414105 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.418509 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.436281 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.443610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") pod \"nova-cell0-db-create-z9ktw\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") " pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.451566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") pod \"nova-api-db-create-9kpbk\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") " pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.527226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.527465 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.538184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9kpbk" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.551820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.561793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.598765 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gssfn"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.614575 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-z9ktw" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.648191 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.650188 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.659291 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.667972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.668068 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.668163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.668209 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.669497 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") pod \"nova-cell1-db-create-gssfn\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.669881 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.672039 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.701749 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") pod \"nova-cell1-db-create-gssfn\" (UID: 
\"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") " pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.702418 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") pod \"nova-api-68d6-account-create-update-6m8ng\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") " pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.737045 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gssfn" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.770455 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.770740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.820970 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.822824 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.833754 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.837254 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"] Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.872849 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.873020 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.873106 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.873164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.874262 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.895537 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-68d6-account-create-update-6m8ng" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.897667 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") pod \"nova-cell0-e113-account-create-update-9fnwx\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") " pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.978743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.979391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:02 crc kubenswrapper[4869]: I0202 14:54:02.980739 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.002405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") pod \"nova-cell1-74b0-account-create-update-mdkgh\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") " pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.081296 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.189743 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.327860 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"] Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.372390 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9kpbk"] Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.386774 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1748ab6_c795_414c_a52b_7bf549358524.slice/crio-3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821 WatchSource:0}: Error finding container 3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821: Status 404 returned error can't find the container with id 3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821 Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.524965 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-gssfn"] Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.534279 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7ca155_a072_4915_b5c5_e0b36a29af9b.slice/crio-16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa WatchSource:0}: Error finding container 16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa: Status 404 returned error can't find the container with id 16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.616738 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z9ktw" event={"ID":"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27","Type":"ContainerStarted","Data":"44b61834eee1c536aa0f35eec95eea4815501cb97e71d1d71bf2626e5b553f43"} Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.618081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9kpbk" event={"ID":"b1748ab6-c795-414c-a52b-7bf549358524","Type":"ContainerStarted","Data":"3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821"} Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.619796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gssfn" event={"ID":"dc7ca155-a072-4915-b5c5-e0b36a29af9b","Type":"ContainerStarted","Data":"16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa"} Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.716473 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c50ffbc_cc89_4adc_ae61_9100df4a3ba1.slice/crio-8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2 WatchSource:0}: Error finding container 8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2: Status 404 returned error can't find the container with id 8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2 Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.716498 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"] Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.803345 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"] Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.830204 4869 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdcf5e33_de9f_408f_8200_6f42fe0d0771.slice/crio-7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930 WatchSource:0}: Error finding container 7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930: Status 404 returned error can't find the container with id 7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930 Feb 02 14:54:03 crc kubenswrapper[4869]: I0202 14:54:03.912717 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"] Feb 02 14:54:03 crc kubenswrapper[4869]: W0202 14:54:03.914685 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ff7e998_18b9_4fbe_906a_d756f7cf16c6.slice/crio-3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253 WatchSource:0}: Error finding container 3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253: Status 404 returned error can't find the container with id 3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253 Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.253455 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.331717 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.331866 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.331968 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.332058 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.332091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.332126 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.332182 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") pod \"aa9b6032-666f-44cb-849e-b82c50dc030a\" (UID: \"aa9b6032-666f-44cb-849e-b82c50dc030a\") " Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.334742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.340898 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.364238 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts" (OuterVolumeSpecName: "scripts") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.365178 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml" (OuterVolumeSpecName: "kube-api-access-h68ml") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "kube-api-access-h68ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.403154 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438454 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438513 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438529 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438548 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa9b6032-666f-44cb-849e-b82c50dc030a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.438563 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h68ml\" (UniqueName: \"kubernetes.io/projected/aa9b6032-666f-44cb-849e-b82c50dc030a-kube-api-access-h68ml\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.563299 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.632084 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data" (OuterVolumeSpecName: "config-data") pod "aa9b6032-666f-44cb-849e-b82c50dc030a" (UID: "aa9b6032-666f-44cb-849e-b82c50dc030a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.646980 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerID="5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" exitCode=0 Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647066 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa9b6032-666f-44cb-849e-b82c50dc030a","Type":"ContainerDied","Data":"2a3a8afa5f4f39b9c1443825049b785119a54a533b4cf3c5d4655fb9914dd6f0"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647155 4869 scope.go:117] "RemoveContainer" containerID="062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647354 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.647925 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.648853 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9b6032-666f-44cb-849e-b82c50dc030a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.658386 4869 generic.go:334] "Generic (PLEG): container finished" podID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" containerID="48561ec38ba8e1d863e22aea7226f624c163b5e704dc9c40612b25be2fba3af4" exitCode=0 Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.658543 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z9ktw" event={"ID":"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27","Type":"ContainerDied","Data":"48561ec38ba8e1d863e22aea7226f624c163b5e704dc9c40612b25be2fba3af4"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.669131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9kpbk" event={"ID":"b1748ab6-c795-414c-a52b-7bf549358524","Type":"ContainerStarted","Data":"94cbdab87b048c1314f2f73c2a849ceaf199319d9270e621070be8b05d642b46"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.685509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" event={"ID":"bdcf5e33-de9f-408f-8200-6f42fe0d0771","Type":"ContainerStarted","Data":"99575408197da6f36edff3800154367961b49a995c8eac1c98ed312b3e5cddeb"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.685566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" event={"ID":"bdcf5e33-de9f-408f-8200-6f42fe0d0771","Type":"ContainerStarted","Data":"7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.694549 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-68d6-account-create-update-6m8ng" event={"ID":"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1","Type":"ContainerStarted","Data":"d596a1a6b4874f02790897366970dbb255c9422002d2101a6f5f167dd8807bca"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.694630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-68d6-account-create-update-6m8ng" event={"ID":"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1","Type":"ContainerStarted","Data":"8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.700932 4869 generic.go:334] "Generic (PLEG): container finished" podID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" containerID="65c894d6caff283d8e12ca5ca2f52f63ea73a840cf785e78685f2636257f7088" exitCode=0 Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.701046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gssfn" event={"ID":"dc7ca155-a072-4915-b5c5-e0b36a29af9b","Type":"ContainerDied","Data":"65c894d6caff283d8e12ca5ca2f52f63ea73a840cf785e78685f2636257f7088"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.709821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" 
event={"ID":"0ff7e998-18b9-4fbe-906a-d756f7cf16c6","Type":"ContainerStarted","Data":"7a8d84378031a92f9cb60c774081e0424ba60a9436ccfe3c735c843dfed27fbb"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.709920 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" event={"ID":"0ff7e998-18b9-4fbe-906a-d756f7cf16c6","Type":"ContainerStarted","Data":"3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253"} Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.721071 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-9kpbk" podStartSLOduration=2.721038806 podStartE2EDuration="2.721038806s" podCreationTimestamp="2026-02-02 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:54:04.711615223 +0000 UTC m=+1246.356251993" watchObservedRunningTime="2026-02-02 14:54:04.721038806 +0000 UTC m=+1246.365675576" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.741372 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" podStartSLOduration=2.741343808 podStartE2EDuration="2.741343808s" podCreationTimestamp="2026-02-02 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:54:04.737342568 +0000 UTC m=+1246.381979348" watchObservedRunningTime="2026-02-02 14:54:04.741343808 +0000 UTC m=+1246.385980578" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.764813 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-68d6-account-create-update-6m8ng" podStartSLOduration=2.7647886379999997 podStartE2EDuration="2.764788638s" podCreationTimestamp="2026-02-02 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:54:04.757970239 +0000 UTC m=+1246.402607009" watchObservedRunningTime="2026-02-02 14:54:04.764788638 +0000 UTC m=+1246.409425398" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.810372 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" podStartSLOduration=2.810347764 podStartE2EDuration="2.810347764s" podCreationTimestamp="2026-02-02 14:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:54:04.801502965 +0000 UTC m=+1246.446139735" watchObservedRunningTime="2026-02-02 14:54:04.810347764 +0000 UTC m=+1246.454984524" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.893168 4869 scope.go:117] "RemoveContainer" containerID="603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.905224 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.928671 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.937228 4869 scope.go:117] "RemoveContainer" containerID="5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964126 4869 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:04 crc kubenswrapper[4869]: E0202 14:54:04.964819 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="sg-core" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964836 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="sg-core" Feb 02 14:54:04 crc kubenswrapper[4869]: E0202 14:54:04.964856 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964867 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" Feb 02 14:54:04 crc kubenswrapper[4869]: E0202 14:54:04.964881 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-notification-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964890 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-notification-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: E0202 14:54:04.964931 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-central-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.964943 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-central-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.965173 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="sg-core" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.965201 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-notification-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.965216 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="proxy-httpd" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.965229 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" containerName="ceilometer-central-agent" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.970386 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.986742 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.988370 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:54:04 crc kubenswrapper[4869]: I0202 14:54:04.988456 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.000213 4869 scope.go:117] "RemoveContainer" containerID="494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.054695 4869 scope.go:117] "RemoveContainer" containerID="062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" Feb 02 14:54:05 crc kubenswrapper[4869]: E0202 14:54:05.055545 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f\": container with ID starting with 062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f not found: ID does not exist" containerID="062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.055585 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f"} err="failed to get container status \"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f\": rpc error: code = NotFound desc = could not find container \"062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f\": container with ID starting with 062bd89b43d26abcd5f42ca3505659bf4f657ea5714a9b28b15216884611253f not found: ID does not exist" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.055611 4869 scope.go:117] "RemoveContainer" containerID="603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" Feb 02 14:54:05 crc kubenswrapper[4869]: E0202 14:54:05.056356 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876\": container with ID starting with 603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876 not found: ID does not exist" containerID="603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.056431 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876"} err="failed to get container status \"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876\": rpc error: code = NotFound desc = could not find container \"603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876\": container with ID starting with 603bf7cc83bd536f08cdf14056d15ebc288d3e5609b0f3ce33ff06ebfe779876 not found: ID does not exist" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.056470 4869 scope.go:117] "RemoveContainer" containerID="5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" Feb 02 14:54:05 crc kubenswrapper[4869]: E0202 14:54:05.057377 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505\": container with ID starting with 5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505 not found: ID does not exist" containerID="5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.057497 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505"} err="failed to get container status \"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505\": rpc error: code = NotFound desc = could not find container \"5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505\": container with ID starting with 5c446a3c772b23388423d24f802d0b8bebb7fc2fb95373a163d9cd99afb44505 not found: ID does not exist" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.057546 4869 scope.go:117] "RemoveContainer" containerID="494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" Feb 02 14:54:05 crc kubenswrapper[4869]: E0202 14:54:05.059229 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689\": container with ID starting with 494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689 not found: ID does not exist" containerID="494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.059585 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689"} err="failed to get container status \"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689\": rpc error: code = NotFound desc = could not find container \"494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689\": container with ID starting with 494d97102f19abb856fda0075c9c6b0665c021129085d9e2f00bb06f2c4df689 not found: ID does not exist" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061833 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.061970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.062014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.062038 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166492 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166537 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.166737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.167247 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.167713 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.172299 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.173389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.173589 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.174121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.185781 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") pod \"ceilometer-0\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.307281 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.475149 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9b6032-666f-44cb-849e-b82c50dc030a" path="/var/lib/kubelet/pods/aa9b6032-666f-44cb-849e-b82c50dc030a/volumes" Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.725932 4869 generic.go:334] "Generic (PLEG): container finished" podID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" containerID="d596a1a6b4874f02790897366970dbb255c9422002d2101a6f5f167dd8807bca" exitCode=0 Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.726615 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-68d6-account-create-update-6m8ng" event={"ID":"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1","Type":"ContainerDied","Data":"d596a1a6b4874f02790897366970dbb255c9422002d2101a6f5f167dd8807bca"} Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.734358 4869 generic.go:334] "Generic (PLEG): container finished" podID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" containerID="7a8d84378031a92f9cb60c774081e0424ba60a9436ccfe3c735c843dfed27fbb" exitCode=0 Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.734443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" event={"ID":"0ff7e998-18b9-4fbe-906a-d756f7cf16c6","Type":"ContainerDied","Data":"7a8d84378031a92f9cb60c774081e0424ba60a9436ccfe3c735c843dfed27fbb"} Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.747528 4869 generic.go:334] "Generic (PLEG): container finished" podID="b1748ab6-c795-414c-a52b-7bf549358524" containerID="94cbdab87b048c1314f2f73c2a849ceaf199319d9270e621070be8b05d642b46" exitCode=0 Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.747591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9kpbk" event={"ID":"b1748ab6-c795-414c-a52b-7bf549358524","Type":"ContainerDied","Data":"94cbdab87b048c1314f2f73c2a849ceaf199319d9270e621070be8b05d642b46"} Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.752095 4869 generic.go:334] "Generic (PLEG): container finished" podID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" containerID="99575408197da6f36edff3800154367961b49a995c8eac1c98ed312b3e5cddeb" exitCode=0 Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.752344 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" event={"ID":"bdcf5e33-de9f-408f-8200-6f42fe0d0771","Type":"ContainerDied","Data":"99575408197da6f36edff3800154367961b49a995c8eac1c98ed312b3e5cddeb"} Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.813163 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:05 crc kubenswrapper[4869]: I0202 14:54:05.899209 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:05 crc kubenswrapper[4869]: W0202 14:54:05.901113 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57ed2c6_7be3_4db2_919b_6cc161df175a.slice/crio-a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b WatchSource:0}: Error finding container a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b: Status 404 returned error can't find the container with id a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.292170 4869 util.go:48] "No ready sandbox for 
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.302712 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.401538 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") pod \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") "
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.402876 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") pod \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") "
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.402990 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") pod \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\" (UID: \"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27\") "
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.403096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") pod \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\" (UID: \"dc7ca155-a072-4915-b5c5-e0b36a29af9b\") "
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.404149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc7ca155-a072-4915-b5c5-e0b36a29af9b" (UID: "dc7ca155-a072-4915-b5c5-e0b36a29af9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.404684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" (UID: "d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.409850 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp" (OuterVolumeSpecName: "kube-api-access-nrvbp") pod "dc7ca155-a072-4915-b5c5-e0b36a29af9b" (UID: "dc7ca155-a072-4915-b5c5-e0b36a29af9b"). InnerVolumeSpecName "kube-api-access-nrvbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.411925 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68" (OuterVolumeSpecName: "kube-api-access-h9p68") pod "d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" (UID: "d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27"). InnerVolumeSpecName "kube-api-access-h9p68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.510570 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9p68\" (UniqueName: \"kubernetes.io/projected/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-kube-api-access-h9p68\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.510612 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrvbp\" (UniqueName: \"kubernetes.io/projected/dc7ca155-a072-4915-b5c5-e0b36a29af9b-kube-api-access-nrvbp\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.510624 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.510633 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7ca155-a072-4915-b5c5-e0b36a29af9b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.773118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-gssfn" event={"ID":"dc7ca155-a072-4915-b5c5-e0b36a29af9b","Type":"ContainerDied","Data":"16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa"}
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.773643 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16abdcce7c9d8fffb1a0d6b6dfc3f18aa5820eb639be32fcbe216b0810ee9afa"
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.773170 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-gssfn"
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.776222 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37"}
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.776283 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b"}
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.791240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z9ktw" event={"ID":"d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27","Type":"ContainerDied","Data":"44b61834eee1c536aa0f35eec95eea4815501cb97e71d1d71bf2626e5b553f43"}
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.791333 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b61834eee1c536aa0f35eec95eea4815501cb97e71d1d71bf2626e5b553f43"
Feb 02 14:54:06 crc kubenswrapper[4869]: I0202 14:54:06.791499 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z9ktw"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.165499 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.335538 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") pod \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") "
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.335593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") pod \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\" (UID: \"bdcf5e33-de9f-408f-8200-6f42fe0d0771\") "
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.336763 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdcf5e33-de9f-408f-8200-6f42fe0d0771" (UID: "bdcf5e33-de9f-408f-8200-6f42fe0d0771"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.349389 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv" (OuterVolumeSpecName: "kube-api-access-rrrgv") pod "bdcf5e33-de9f-408f-8200-6f42fe0d0771" (UID: "bdcf5e33-de9f-408f-8200-6f42fe0d0771"). InnerVolumeSpecName "kube-api-access-rrrgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.386772 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.400100 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.412414 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.438807 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") pod \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") "
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.438980 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") pod \"b1748ab6-c795-414c-a52b-7bf549358524\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") "
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439012 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") pod \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") "
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") pod \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\" (UID: \"0ff7e998-18b9-4fbe-906a-d756f7cf16c6\") "
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") pod \"b1748ab6-c795-414c-a52b-7bf549358524\" (UID: \"b1748ab6-c795-414c-a52b-7bf549358524\") "
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") pod \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\" (UID: \"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1\") "
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439585 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdcf5e33-de9f-408f-8200-6f42fe0d0771-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.439599 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrrgv\" (UniqueName: \"kubernetes.io/projected/bdcf5e33-de9f-408f-8200-6f42fe0d0771-kube-api-access-rrrgv\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.440474 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1748ab6-c795-414c-a52b-7bf549358524" (UID: "b1748ab6-c795-414c-a52b-7bf549358524"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.440622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ff7e998-18b9-4fbe-906a-d756f7cf16c6" (UID: "0ff7e998-18b9-4fbe-906a-d756f7cf16c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.441234 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" (UID: "2c50ffbc-cc89-4adc-ae61-9100df4a3ba1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.451115 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw" (OuterVolumeSpecName: "kube-api-access-7h8fw") pod "0ff7e998-18b9-4fbe-906a-d756f7cf16c6" (UID: "0ff7e998-18b9-4fbe-906a-d756f7cf16c6"). InnerVolumeSpecName "kube-api-access-7h8fw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.451186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz" (OuterVolumeSpecName: "kube-api-access-k8trz") pod "b1748ab6-c795-414c-a52b-7bf549358524" (UID: "b1748ab6-c795-414c-a52b-7bf549358524"). InnerVolumeSpecName "kube-api-access-k8trz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.454588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm" (OuterVolumeSpecName: "kube-api-access-n66bm") pod "2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" (UID: "2c50ffbc-cc89-4adc-ae61-9100df4a3ba1"). InnerVolumeSpecName "kube-api-access-n66bm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541158 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541209 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8trz\" (UniqueName: \"kubernetes.io/projected/b1748ab6-c795-414c-a52b-7bf549358524-kube-api-access-k8trz\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541227 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h8fw\" (UniqueName: \"kubernetes.io/projected/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-kube-api-access-7h8fw\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541239 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff7e998-18b9-4fbe-906a-d756f7cf16c6-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541251 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1748ab6-c795-414c-a52b-7bf549358524-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.541266 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n66bm\" (UniqueName: \"kubernetes.io/projected/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1-kube-api-access-n66bm\") on node \"crc\" DevicePath \"\""
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.810331 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9kpbk" event={"ID":"b1748ab6-c795-414c-a52b-7bf549358524","Type":"ContainerDied","Data":"3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821"}
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.810786 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3223ee9128b45278d2cf015b5565d774794933be2923298ca4b9334c46d73821"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.810389 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9kpbk"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.813238 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e113-account-create-update-9fnwx"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.813233 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e113-account-create-update-9fnwx" event={"ID":"bdcf5e33-de9f-408f-8200-6f42fe0d0771","Type":"ContainerDied","Data":"7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930"}
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.813307 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe91343442ba48a2f9af62c7c902364bc8241cef20f0011be017eeafe9b8930"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.816534 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-68d6-account-create-update-6m8ng"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.816572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-68d6-account-create-update-6m8ng" event={"ID":"2c50ffbc-cc89-4adc-ae61-9100df4a3ba1","Type":"ContainerDied","Data":"8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2"}
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.816643 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f214be767be9a8c5b7e5ce690e1c3c71f7b105f98175fe20838d00f38f001c2"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.827130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh" event={"ID":"0ff7e998-18b9-4fbe-906a-d756f7cf16c6","Type":"ContainerDied","Data":"3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253"}
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.827187 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b97f661296f961384d6ffa305b171af45cf7fd5f3070184b2800bf476b6c253"
Feb 02 14:54:07 crc kubenswrapper[4869]: I0202 14:54:07.827277 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-74b0-account-create-update-mdkgh"
Feb 02 14:54:08 crc kubenswrapper[4869]: I0202 14:54:08.841025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5"}
Feb 02 14:54:09 crc kubenswrapper[4869]: I0202 14:54:09.852569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f"}
Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.872742 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerStarted","Data":"5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e"}
Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873085 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="proxy-httpd" containerID="cri-o://5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e" gracePeriod=30
Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873095 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="sg-core" containerID="cri-o://2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f" gracePeriod=30
Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873097 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-notification-agent" containerID="cri-o://ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5" gracePeriod=30
Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873138 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-central-agent" containerID="cri-o://387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37" gracePeriod=30
Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.873704 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 14:54:11 crc kubenswrapper[4869]: I0202 14:54:11.905199 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.375821266 podStartE2EDuration="7.905169238s" podCreationTimestamp="2026-02-02 14:54:04 +0000 UTC" firstStartedPulling="2026-02-02 14:54:05.907007163 +0000 UTC m=+1247.551643943" lastFinishedPulling="2026-02-02 14:54:11.436355155 +0000 UTC m=+1253.080991915" observedRunningTime="2026-02-02 14:54:11.900644646 +0000 UTC m=+1253.545281426" watchObservedRunningTime="2026-02-02 14:54:11.905169238 +0000 UTC m=+1253.549806008"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.819279 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"]
Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820146 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1748ab6-c795-414c-a52b-7bf549358524" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820166 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1748ab6-c795-414c-a52b-7bf549358524" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820188 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820195 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820209 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820216 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820227 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820234 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820249 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820255 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: E0202 14:54:12.820267 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820273 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820429 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1748ab6-c795-414c-a52b-7bf549358524" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820444 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820464 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820475 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" containerName="mariadb-database-create"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820485 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.820494 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" containerName="mariadb-account-create-update"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.821149 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s5pkh"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.824085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.824085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wfkgs"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.824388 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.836492 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"]
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899135 4869 generic.go:334] "Generic (PLEG): container finished" podID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerID="5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e" exitCode=0
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899194 4869 generic.go:334] "Generic (PLEG): container finished" podID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerID="2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f" exitCode=2
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899204 4869 generic.go:334] "Generic (PLEG): container finished" podID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerID="ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5" exitCode=0
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e"}
Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899300 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f"}
event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f"} Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.899315 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5"} Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.964673 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.964722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.964762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:12 crc kubenswrapper[4869]: I0202 14:54:12.964794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.067902 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.067999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.068043 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.068078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.075005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.076554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.077106 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.089441 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") pod \"nova-cell0-conductor-db-sync-s5pkh\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.140621 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.652081 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"] Feb 02 14:54:13 crc kubenswrapper[4869]: I0202 14:54:13.911306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" event={"ID":"100a5963-124e-4354-8b5a-fadefef2a0a4","Type":"ContainerStarted","Data":"cd4c7fb90fab4fd4c0d2e3de0824c4a040e7e86423a38a960666cd32c520f1dd"} Feb 02 14:54:18 crc kubenswrapper[4869]: I0202 14:54:18.988387 4869 generic.go:334] "Generic (PLEG): container finished" podID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerID="387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37" exitCode=0 Feb 02 14:54:18 crc kubenswrapper[4869]: I0202 14:54:18.988474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37"} Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.025646 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d57ed2c6-7be3-4db2-919b-6cc161df175a","Type":"ContainerDied","Data":"a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b"} Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.026506 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a28c644f6d68e1684799139f17b3db6f5814ea993ff803148c4dbc8da259e61b" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.082947 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.284286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.284809 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.284966 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.284996 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285475 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: 
\"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285520 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") pod \"d57ed2c6-7be3-4db2-919b-6cc161df175a\" (UID: \"d57ed2c6-7be3-4db2-919b-6cc161df175a\") " Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285756 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.285956 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.286199 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.289600 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts" (OuterVolumeSpecName: "scripts") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.289747 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws" (OuterVolumeSpecName: "kube-api-access-98jws") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "kube-api-access-98jws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.314757 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.368397 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.383744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data" (OuterVolumeSpecName: "config-data") pod "d57ed2c6-7be3-4db2-919b-6cc161df175a" (UID: "d57ed2c6-7be3-4db2-919b-6cc161df175a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387329 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387390 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387407 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d57ed2c6-7be3-4db2-919b-6cc161df175a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387418 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387431 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jws\" (UniqueName: \"kubernetes.io/projected/d57ed2c6-7be3-4db2-919b-6cc161df175a-kube-api-access-98jws\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:22 crc kubenswrapper[4869]: I0202 14:54:22.387443 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57ed2c6-7be3-4db2-919b-6cc161df175a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.055661 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.055992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" event={"ID":"100a5963-124e-4354-8b5a-fadefef2a0a4","Type":"ContainerStarted","Data":"ebe1f428461f9ca88e79225425980e308f9e983a005ecc404634b54d8fbf41b8"} Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.106489 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" podStartSLOduration=2.955524879 podStartE2EDuration="11.106463219s" podCreationTimestamp="2026-02-02 14:54:12 +0000 UTC" firstStartedPulling="2026-02-02 14:54:13.660794212 +0000 UTC m=+1255.305430982" lastFinishedPulling="2026-02-02 14:54:21.811732542 +0000 UTC m=+1263.456369322" observedRunningTime="2026-02-02 14:54:23.088031163 +0000 UTC m=+1264.732667933" watchObservedRunningTime="2026-02-02 14:54:23.106463219 +0000 UTC m=+1264.751099989" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.126949 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.139212 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.152178 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:23 crc kubenswrapper[4869]: E0202 14:54:23.153231 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-notification-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.153367 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-notification-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: E0202 14:54:23.153459 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="proxy-httpd" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.153543 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="proxy-httpd" Feb 02 14:54:23 crc kubenswrapper[4869]: E0202 14:54:23.153629 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-central-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.153695 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-central-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: E0202 14:54:23.153797 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="sg-core" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.153863 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="sg-core" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.154217 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="proxy-httpd" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.154316 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-central-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.154413 4869 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="sg-core" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.154488 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" containerName="ceilometer-notification-agent" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.156568 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.161657 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.161979 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.163080 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.209405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.209795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.209966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.210098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.210408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.210601 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.210701 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") pod \"ceilometer-0\" (UID: 
\"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311625 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311713 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311756 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311894 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.311951 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.312384 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.313511 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.319145 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " 
pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.319291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.326754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.329865 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.336635 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") pod \"ceilometer-0\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " pod="openstack/ceilometer-0" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.481142 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57ed2c6-7be3-4db2-919b-6cc161df175a" path="/var/lib/kubelet/pods/d57ed2c6-7be3-4db2-919b-6cc161df175a/volumes" Feb 02 14:54:23 crc kubenswrapper[4869]: I0202 14:54:23.493963 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:24 crc kubenswrapper[4869]: I0202 14:54:24.010987 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:24 crc kubenswrapper[4869]: I0202 14:54:24.024939 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:54:24 crc kubenswrapper[4869]: I0202 14:54:24.073176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"dbafcb0e5e084df3fe80d818d3e6101e9afd6d736ce2a1f056810697e37884cd"} Feb 02 14:54:24 crc kubenswrapper[4869]: I0202 14:54:24.445503 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:25 crc kubenswrapper[4869]: I0202 14:54:25.087292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5"} Feb 02 14:54:26 crc kubenswrapper[4869]: I0202 14:54:26.100393 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588"} Feb 02 14:54:29 crc kubenswrapper[4869]: I0202 14:54:29.140278 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842"} Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerStarted","Data":"0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b"} Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189878 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189582 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-notification-agent" containerID="cri-o://e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588" gracePeriod=30 Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189421 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="proxy-httpd" containerID="cri-o://0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b" gracePeriod=30 Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189363 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-central-agent" containerID="cri-o://ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5" gracePeriod=30 Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.189515 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="sg-core" containerID="cri-o://674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842" 
gracePeriod=30 Feb 02 14:54:34 crc kubenswrapper[4869]: I0202 14:54:34.228431 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.864205765 podStartE2EDuration="11.228403966s" podCreationTimestamp="2026-02-02 14:54:23 +0000 UTC" firstStartedPulling="2026-02-02 14:54:24.024523261 +0000 UTC m=+1265.669160031" lastFinishedPulling="2026-02-02 14:54:33.388721472 +0000 UTC m=+1275.033358232" observedRunningTime="2026-02-02 14:54:34.218835439 +0000 UTC m=+1275.863472219" watchObservedRunningTime="2026-02-02 14:54:34.228403966 +0000 UTC m=+1275.873040736" Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.202890 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f88376b-53a4-4124-abbe-510899dd905e" containerID="0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b" exitCode=0 Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.202973 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f88376b-53a4-4124-abbe-510899dd905e" containerID="674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842" exitCode=2 Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.202956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b"} Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.203038 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842"} Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.203054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5"} Feb 02 14:54:35 crc kubenswrapper[4869]: I0202 14:54:35.202985 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f88376b-53a4-4124-abbe-510899dd905e" containerID="ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5" exitCode=0 Feb 02 14:54:36 crc kubenswrapper[4869]: I0202 14:54:36.219434 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f88376b-53a4-4124-abbe-510899dd905e" containerID="e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588" exitCode=0 Feb 02 14:54:36 crc kubenswrapper[4869]: I0202 14:54:36.219512 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588"} Feb 02 14:54:36 crc kubenswrapper[4869]: I0202 14:54:36.959147 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116199 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116353 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116450 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116687 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.116833 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") pod \"2f88376b-53a4-4124-abbe-510899dd905e\" (UID: \"2f88376b-53a4-4124-abbe-510899dd905e\") " Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.117165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.117385 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.117424 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.127279 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts" (OuterVolumeSpecName: "scripts") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.129297 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2" (OuterVolumeSpecName: "kube-api-access-vc5r2") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "kube-api-access-vc5r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.152004 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.194683 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219283 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219499 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219577 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f88376b-53a4-4124-abbe-510899dd905e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219641 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc5r2\" (UniqueName: \"kubernetes.io/projected/2f88376b-53a4-4124-abbe-510899dd905e-kube-api-access-vc5r2\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219711 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.219871 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data" (OuterVolumeSpecName: "config-data") pod "2f88376b-53a4-4124-abbe-510899dd905e" (UID: "2f88376b-53a4-4124-abbe-510899dd905e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.233623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f88376b-53a4-4124-abbe-510899dd905e","Type":"ContainerDied","Data":"dbafcb0e5e084df3fe80d818d3e6101e9afd6d736ce2a1f056810697e37884cd"} Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.234345 4869 scope.go:117] "RemoveContainer" containerID="0aaad636e0b0b41c66ebfc025453847fc5cb7525651530ba40d1e9e1d8c2921b" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.233998 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.287532 4869 scope.go:117] "RemoveContainer" containerID="674a76b59e09250e5f6455be0b5e6a02246b59517a96c5bb55567c5075e79842" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.301724 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.313188 4869 scope.go:117] "RemoveContainer" containerID="e623de2b7ed48ab4ce9f04e64b2608ecb14c86b34a360e12d6beb22840326588" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.315943 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.326931 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f88376b-53a4-4124-abbe-510899dd905e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347005 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:37 crc kubenswrapper[4869]: E0202 14:54:37.347676 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="sg-core" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347702 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="sg-core" Feb 02 14:54:37 crc kubenswrapper[4869]: E0202 14:54:37.347751 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-notification-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347762 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-notification-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: E0202 14:54:37.347787 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="proxy-httpd" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347794 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="proxy-httpd" Feb 02 14:54:37 crc kubenswrapper[4869]: E0202 14:54:37.347818 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-central-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.347828 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-central-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.348168 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="proxy-httpd" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.348201 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="sg-core" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.348225 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-notification-agent" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.348244 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88376b-53a4-4124-abbe-510899dd905e" containerName="ceilometer-central-agent" Feb 02 
14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.350971 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.354749 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.358217 4869 scope.go:117] "RemoveContainer" containerID="ae270a4d73dc72d33600de98bf17127a5aee5f52abcd06ac77c3e552235ac3a5" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.358839 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.359865 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.481264 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f88376b-53a4-4124-abbe-510899dd905e" path="/var/lib/kubelet/pods/2f88376b-53a4-4124-abbe-510899dd905e/volumes" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.530673 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.530758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.530783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.531005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.531205 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.531431 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.531527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsd22\" (UniqueName: 
\"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.634936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635195 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635279 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsd22\" (UniqueName: \"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.635870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.636055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.643115 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.643210 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.644876 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.645124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.652862 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsd22\" (UniqueName: \"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") pod \"ceilometer-0\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " pod="openstack/ceilometer-0" Feb 02 14:54:37 crc kubenswrapper[4869]: I0202 14:54:37.687483 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:54:38 crc kubenswrapper[4869]: I0202 14:54:38.150667 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:54:38 crc kubenswrapper[4869]: I0202 14:54:38.244718 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"2ee7ad043782b76a75c638017ecf8eb737d1dae5d41ae89149f1f57042e858c0"} Feb 02 14:54:39 crc kubenswrapper[4869]: I0202 14:54:39.260205 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294"} Feb 02 14:54:40 crc kubenswrapper[4869]: I0202 14:54:40.276119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20"} Feb 02 14:54:41 crc kubenswrapper[4869]: I0202 14:54:41.291505 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad"} Feb 02 14:54:45 crc kubenswrapper[4869]: I0202 14:54:45.332749 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerStarted","Data":"247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be"} Feb 02 14:54:45 crc kubenswrapper[4869]: I0202 14:54:45.365525 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.219052887 podStartE2EDuration="8.365483668s" podCreationTimestamp="2026-02-02 14:54:37 +0000 UTC" firstStartedPulling="2026-02-02 14:54:38.158405579 +0000 UTC m=+1279.803042349" lastFinishedPulling="2026-02-02 14:54:44.30483637 +0000 UTC m=+1285.949473130" observedRunningTime="2026-02-02 14:54:45.35706738 +0000 UTC m=+1287.001704170" watchObservedRunningTime="2026-02-02 14:54:45.365483668 +0000 UTC m=+1287.010120438" Feb 02 14:54:46 crc kubenswrapper[4869]: I0202 14:54:46.342621 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:54:59 crc kubenswrapper[4869]: I0202 14:54:59.474835 4869 generic.go:334] "Generic (PLEG): container finished" podID="100a5963-124e-4354-8b5a-fadefef2a0a4" containerID="ebe1f428461f9ca88e79225425980e308f9e983a005ecc404634b54d8fbf41b8" exitCode=0 Feb 02 14:54:59 crc kubenswrapper[4869]: I0202 14:54:59.474938 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" event={"ID":"100a5963-124e-4354-8b5a-fadefef2a0a4","Type":"ContainerDied","Data":"ebe1f428461f9ca88e79225425980e308f9e983a005ecc404634b54d8fbf41b8"} Feb 02 14:55:00 crc kubenswrapper[4869]: I0202 14:55:00.922049 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.088278 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") pod \"100a5963-124e-4354-8b5a-fadefef2a0a4\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.088501 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") pod \"100a5963-124e-4354-8b5a-fadefef2a0a4\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.088560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") pod \"100a5963-124e-4354-8b5a-fadefef2a0a4\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.088667 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") pod \"100a5963-124e-4354-8b5a-fadefef2a0a4\" (UID: \"100a5963-124e-4354-8b5a-fadefef2a0a4\") " Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.096760 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts" (OuterVolumeSpecName: "scripts") pod "100a5963-124e-4354-8b5a-fadefef2a0a4" (UID: "100a5963-124e-4354-8b5a-fadefef2a0a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.097202 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6" (OuterVolumeSpecName: "kube-api-access-zhzn6") pod "100a5963-124e-4354-8b5a-fadefef2a0a4" (UID: "100a5963-124e-4354-8b5a-fadefef2a0a4"). InnerVolumeSpecName "kube-api-access-zhzn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.119813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data" (OuterVolumeSpecName: "config-data") pod "100a5963-124e-4354-8b5a-fadefef2a0a4" (UID: "100a5963-124e-4354-8b5a-fadefef2a0a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.122406 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "100a5963-124e-4354-8b5a-fadefef2a0a4" (UID: "100a5963-124e-4354-8b5a-fadefef2a0a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.191542 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.191606 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhzn6\" (UniqueName: \"kubernetes.io/projected/100a5963-124e-4354-8b5a-fadefef2a0a4-kube-api-access-zhzn6\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.191622 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.191634 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100a5963-124e-4354-8b5a-fadefef2a0a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.501301 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.501328 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s5pkh" event={"ID":"100a5963-124e-4354-8b5a-fadefef2a0a4","Type":"ContainerDied","Data":"cd4c7fb90fab4fd4c0d2e3de0824c4a040e7e86423a38a960666cd32c520f1dd"} Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.502084 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4c7fb90fab4fd4c0d2e3de0824c4a040e7e86423a38a960666cd32c520f1dd" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.607266 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 14:55:01 crc kubenswrapper[4869]: E0202 14:55:01.607731 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100a5963-124e-4354-8b5a-fadefef2a0a4" containerName="nova-cell0-conductor-db-sync" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.607754 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="100a5963-124e-4354-8b5a-fadefef2a0a4" containerName="nova-cell0-conductor-db-sync" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.607973 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="100a5963-124e-4354-8b5a-fadefef2a0a4" containerName="nova-cell0-conductor-db-sync" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.608674 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.614015 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wfkgs" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.614081 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.623005 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.703892 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.704104 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fsx\" (UniqueName: \"kubernetes.io/projected/87abe16e-c4e3-4869-8f9e-6f9b46106c51-kube-api-access-s8fsx\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.704228 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.805876 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-combined-ca-bundle\") pod 
\"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.806073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.806137 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8fsx\" (UniqueName: \"kubernetes.io/projected/87abe16e-c4e3-4869-8f9e-6f9b46106c51-kube-api-access-s8fsx\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.811008 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.811486 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87abe16e-c4e3-4869-8f9e-6f9b46106c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.827110 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8fsx\" (UniqueName: \"kubernetes.io/projected/87abe16e-c4e3-4869-8f9e-6f9b46106c51-kube-api-access-s8fsx\") pod \"nova-cell0-conductor-0\" (UID: \"87abe16e-c4e3-4869-8f9e-6f9b46106c51\") " pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:01 crc kubenswrapper[4869]: I0202 14:55:01.937438 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:02 crc kubenswrapper[4869]: I0202 14:55:02.418440 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 14:55:02 crc kubenswrapper[4869]: I0202 14:55:02.520353 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"87abe16e-c4e3-4869-8f9e-6f9b46106c51","Type":"ContainerStarted","Data":"510d3cd9cfeb8407252b63cdc3df3a7e1fe5b732180ef10f604fe381970cc172"} Feb 02 14:55:03 crc kubenswrapper[4869]: I0202 14:55:03.532079 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"87abe16e-c4e3-4869-8f9e-6f9b46106c51","Type":"ContainerStarted","Data":"582753a8e542fb7ee4048af3bb221d1c4681b0c6141b86732bb4af1a53b70250"} Feb 02 14:55:03 crc kubenswrapper[4869]: I0202 14:55:03.590089 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.590055423 podStartE2EDuration="2.590055423s" podCreationTimestamp="2026-02-02 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:03.577269177 +0000 UTC m=+1305.221905947" watchObservedRunningTime="2026-02-02 14:55:03.590055423 +0000 UTC m=+1305.234692193" Feb 02 14:55:04 crc kubenswrapper[4869]: I0202 14:55:04.541733 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 14:55:07 crc kubenswrapper[4869]: I0202 14:55:07.695039 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 14:55:10 crc kubenswrapper[4869]: I0202 14:55:10.674464 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:10 crc kubenswrapper[4869]: I0202 14:55:10.675034 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerName="kube-state-metrics" containerID="cri-o://ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" gracePeriod=30 Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.237816 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.423616 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") pod \"52d7887e-0487-4179-a0af-6f51b9eed8e7\" (UID: \"52d7887e-0487-4179-a0af-6f51b9eed8e7\") " Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.431407 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j" (OuterVolumeSpecName: "kube-api-access-jsw9j") pod "52d7887e-0487-4179-a0af-6f51b9eed8e7" (UID: "52d7887e-0487-4179-a0af-6f51b9eed8e7"). InnerVolumeSpecName "kube-api-access-jsw9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.526464 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsw9j\" (UniqueName: \"kubernetes.io/projected/52d7887e-0487-4179-a0af-6f51b9eed8e7-kube-api-access-jsw9j\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.612988 4869 generic.go:334] "Generic (PLEG): container finished" podID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerID="ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" exitCode=2 Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.613060 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"52d7887e-0487-4179-a0af-6f51b9eed8e7","Type":"ContainerDied","Data":"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3"} Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.613088 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.613113 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"52d7887e-0487-4179-a0af-6f51b9eed8e7","Type":"ContainerDied","Data":"be9a2fdb7d45a1c90ea28ef9b6fb56b710dc21be6216b1609bd3f6c8c02e9103"} Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.613140 4869 scope.go:117] "RemoveContainer" containerID="ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.641447 4869 scope.go:117] "RemoveContainer" containerID="ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" Feb 02 14:55:11 crc kubenswrapper[4869]: E0202 14:55:11.642072 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3\": container with ID starting with ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3 not found: ID does not exist" containerID="ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.642132 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3"} err="failed to get container status \"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3\": rpc error: code = NotFound desc = could not find container \"ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3\": container with ID starting with ff25f2ca5d1d049cd01a84be68a0b72bd4e602385612b71759188ace60b6e2f3 not found: ID does not exist" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.642216 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.654934 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.677894 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: E0202 14:55:11.682966 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerName="kube-state-metrics" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.682998 4869 
state_mem.go:107] "Deleted CPUSet assignment" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerName="kube-state-metrics" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.683195 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" containerName="kube-state-metrics" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.683865 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.687640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.687851 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.697005 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.833268 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.833344 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.833445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.833537 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7zx\" (UniqueName: \"kubernetes.io/projected/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-api-access-lr7zx\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0" Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.838734 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.839364 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-central-agent" containerID="cri-o://94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294" gracePeriod=30 Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.840053 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-notification-agent" containerID="cri-o://53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20" gracePeriod=30 Feb 02 14:55:11 crc 
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.840783 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="sg-core" containerID="cri-o://03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad" gracePeriod=30
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.938453 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0"
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.939098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0"
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.939282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0"
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.939443 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr7zx\" (UniqueName: \"kubernetes.io/projected/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-api-access-lr7zx\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0"
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.945365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0"
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.953798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0"
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.954346 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c78d1b99-1b30-416f-9afc-3dda8204e757-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0"
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.963851 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr7zx\" (UniqueName: \"kubernetes.io/projected/c78d1b99-1b30-416f-9afc-3dda8204e757-kube-api-access-lr7zx\") pod \"kube-state-metrics-0\" (UID: \"c78d1b99-1b30-416f-9afc-3dda8204e757\") " pod="openstack/kube-state-metrics-0"
Feb 02 14:55:11 crc kubenswrapper[4869]: I0202 14:55:11.993056 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.005735 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.541001 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 14:55:12 crc kubenswrapper[4869]: W0202 14:55:12.542982 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc78d1b99_1b30_416f_9afc_3dda8204e757.slice/crio-cae1aeeb25f5633f3f70367ef86ad6aa92025f4c803c8bb1901a57265bae83e9 WatchSource:0}: Error finding container cae1aeeb25f5633f3f70367ef86ad6aa92025f4c803c8bb1901a57265bae83e9: Status 404 returned error can't find the container with id cae1aeeb25f5633f3f70367ef86ad6aa92025f4c803c8bb1901a57265bae83e9
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.625206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c78d1b99-1b30-416f-9afc-3dda8204e757","Type":"ContainerStarted","Data":"cae1aeeb25f5633f3f70367ef86ad6aa92025f4c803c8bb1901a57265bae83e9"}
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631605 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerID="247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be" exitCode=0
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631652 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerID="03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad" exitCode=2
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631665 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerID="94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294" exitCode=0
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631686 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be"}
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631709 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad"}
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.631719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294"}
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.702945 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"]
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.704487 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.709825 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.710441 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.713475 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"]
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.853778 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.855182 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.868663 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.868772 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.868817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.868843 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.869133 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.912589 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.949035 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.952857 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.966022 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977764 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977895 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977950 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.977974 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.978012 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:12 crc kubenswrapper[4869]: I0202 14:55:12.978078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.001542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.006971 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t"
\"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.012221 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.023075 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.056799 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") pod \"nova-cell0-cell-mapping-2bx2t\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.075818 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.078498 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.081007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.081517 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.081697 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.081878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.082011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.082138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.089986 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.105101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.115506 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.152606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.159118 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.184856 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.184956 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.185014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.185081 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.185117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.185222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.191431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.205451 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.217142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") pod \"nova-scheduler-0\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " pod="openstack/nova-scheduler-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.219601 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.226886 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.230201 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.231665 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.235210 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.289229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.292780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.293280 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.293560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.299931 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.308782 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.311647 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.314336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.334541 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") pod \"nova-api-0\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " pod="openstack/nova-api-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.337670 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.405221 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"]
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.407293 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.414674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.415321 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.415711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.415889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.432122 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"]
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.495771 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d7887e-0487-4179-a0af-6f51b9eed8e7" path="/var/lib/kubelet/pods/52d7887e-0487-4179-a0af-6f51b9eed8e7/volumes"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517666 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517761 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517825 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.517888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.525416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.533728 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.534615 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.557775 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0"
pod \"nova-metadata-0\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.608748 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.619683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.619747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.619781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.620017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.620056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.620537 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.621298 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.622481 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.622505 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.625162 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.662251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c78d1b99-1b30-416f-9afc-3dda8204e757","Type":"ContainerStarted","Data":"cdb90b94df8a6b5eaccd1c3364bfc4782ff72f3abb60923d8194df14a63b981d"} Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.664132 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.664800 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") pod \"dnsmasq-dns-8b8cf6657-sfvmp\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.763685 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.854175 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.373956972 podStartE2EDuration="2.854151477s" podCreationTimestamp="2026-02-02 14:55:11 +0000 UTC" firstStartedPulling="2026-02-02 14:55:12.546262125 +0000 UTC m=+1314.190898895" lastFinishedPulling="2026-02-02 14:55:13.02645663 +0000 UTC m=+1314.671093400" observedRunningTime="2026-02-02 14:55:13.693328961 +0000 UTC m=+1315.337965731" watchObservedRunningTime="2026-02-02 14:55:13.854151477 +0000 UTC m=+1315.498788247"
Feb 02 14:55:13 crc kubenswrapper[4869]: I0202 14:55:13.859292 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.016069 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"]
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.019253 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.025548 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.025670 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.042944 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"]
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.112818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.166362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.166467 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.166502 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.166597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.178158 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"]
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.271508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.271671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.271737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.271953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.281367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.285032 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.293718 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.295291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") pod \"nova-cell1-conductor-db-sync-bfr68\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.378751 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfr68"
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.436303 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.556467 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"]
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.584039 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 14:55:14 crc kubenswrapper[4869]: W0202 14:55:14.678458 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddabd5514_892f_4f35_a9ca_2bf4cde0f5f5.slice/crio-db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7 WatchSource:0}: Error finding container db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7: Status 404 returned error can't find the container with id db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.717377 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerStarted","Data":"8a6758018e930eb35d181b72a0bf4424ef8cce214eee1037a29cee9e990a3ae0"}
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.722740 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1a29990-0400-4b85-86fe-2a00b5809576","Type":"ContainerStarted","Data":"0f50f5a7419043a9c8e4096aa4798378e9fbf6f1d58cf6115d2fbee8f617e5fe"}
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.729754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerStarted","Data":"5be81fda9a826f7e54ad4ca6e6d929236a63542303c28bf9d0e22fa1ebc93458"}
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.733764 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2","Type":"ContainerStarted","Data":"51ac651ddd93f893e6d3273b647d0ad831e6db906a9c89298fdc003ced36fdc1"}
Feb 02 14:55:14 crc kubenswrapper[4869]: I0202 14:55:14.743474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerStarted","Data":"9e1c8170bbe27458021229751e306804c8d9eb43efb07049fd479764776f395c"}
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.230251 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"]
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.304292 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.304367 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.759983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfr68" event={"ID":"6c4bee65-28e6-4f62-a2b5-b4d9227c5624","Type":"ContainerStarted","Data":"b53f792df7cff8163ee8a7592ca68143879b985452df8ad4b61543811725bc69"}
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.760475 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfr68" event={"ID":"6c4bee65-28e6-4f62-a2b5-b4d9227c5624","Type":"ContainerStarted","Data":"f3ee909b4bcfcda6fe199a0eb7bb5f83a5693cde99ca407a1e05e7fdc864bdd9"}
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.772024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerStarted","Data":"38dd79ef05a995974ad73195962d823416fb4b0c857e118492f50f15f1f25c17"}
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.775101 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerStarted","Data":"db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7"}
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.778540 4869 generic.go:334] "Generic (PLEG): container finished" podID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerID="49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c" exitCode=0
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.779590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerDied","Data":"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c"}
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.785716 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bfr68" podStartSLOduration=2.785672281 podStartE2EDuration="2.785672281s" podCreationTimestamp="2026-02-02 14:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:15.780343879 +0000 UTC m=+1317.424980669" watchObservedRunningTime="2026-02-02 14:55:15.785672281 +0000 UTC m=+1317.430309051"
Feb 02 14:55:15 crc kubenswrapper[4869]: I0202 14:55:15.809206 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2bx2t" podStartSLOduration=3.809172062 podStartE2EDuration="3.809172062s" podCreationTimestamp="2026-02-02 14:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:15.797035212 +0000 UTC m=+1317.441671982" watchObservedRunningTime="2026-02-02 14:55:15.809172062 +0000 UTC m=+1317.453808832"
Feb 02 14:55:16 crc kubenswrapper[4869]: I0202 14:55:16.716399 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 14:55:16 crc kubenswrapper[4869]: I0202 14:55:16.732922 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 14:55:17 crc kubenswrapper[4869]: I0202 14:55:17.817246 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerID="53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20" exitCode=0
Feb 02 14:55:17 crc kubenswrapper[4869]: I0202 14:55:17.817693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20"}
kubenswrapper[4869]: I0202 14:55:17.817693 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20"} Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.764105 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.832076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e20726c-76b7-41eb-a27b-3deb88fcc6f9","Type":"ContainerDied","Data":"2ee7ad043782b76a75c638017ecf8eb737d1dae5d41ae89149f1f57042e858c0"} Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.832154 4869 scope.go:117] "RemoveContainer" containerID="247f9fbb81260f7e4b9f048ec56205ae09c7e9bd2ceb6943b08d41e14a1194be" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.832429 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsd22\" (UniqueName: \"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835701 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835760 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.835885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.836068 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.836104 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") pod \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\" (UID: \"4e20726c-76b7-41eb-a27b-3deb88fcc6f9\") " 
Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.837512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.838734 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.845280 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22" (OuterVolumeSpecName: "kube-api-access-vsd22") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "kube-api-access-vsd22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.845343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts" (OuterVolumeSpecName: "scripts") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.926730 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941830 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941879 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941891 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsd22\" (UniqueName: \"kubernetes.io/projected/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-kube-api-access-vsd22\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941923 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.941932 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:18 crc kubenswrapper[4869]: I0202 14:55:18.981866 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.016657 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data" (OuterVolumeSpecName: "config-data") pod "4e20726c-76b7-41eb-a27b-3deb88fcc6f9" (UID: "4e20726c-76b7-41eb-a27b-3deb88fcc6f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.044031 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.044076 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e20726c-76b7-41eb-a27b-3deb88fcc6f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.112962 4869 scope.go:117] "RemoveContainer" containerID="03cd779e4363d5fce161bf1666f6c71888f69bf2b587315589c824460fcce3ad" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.140319 4869 scope.go:117] "RemoveContainer" containerID="53b4a8c2962b7aea73fd4788872818d902f108a539a22fdbf2d2df10cd3a7f20" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.188261 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.219250 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.231154 4869 scope.go:117] "RemoveContainer" containerID="94ecbe83bb1e00d880c8166411a359ae1aa277b85c466312528d09cb9c50e294" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.244964 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:19 crc kubenswrapper[4869]: E0202 14:55:19.245668 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="sg-core" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245693 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="sg-core" Feb 02 14:55:19 crc kubenswrapper[4869]: E0202 14:55:19.245712 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-central-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245721 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-central-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: E0202 14:55:19.245753 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-notification-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-notification-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: E0202 14:55:19.245769 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="proxy-httpd" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245775 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="proxy-httpd" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245974 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-central-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.245990 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="sg-core" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.246009 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="proxy-httpd" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.246032 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" containerName="ceilometer-notification-agent" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.252686 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.257830 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.258028 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.258084 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.262096 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.352194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.353692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.353862 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354273 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354441 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 
14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354760 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.354798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.456657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.457514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.457769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458396 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458516 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458675 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.458716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") pod \"ceilometer-0\" (UID: 
\"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.460269 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.460786 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.462135 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.462358 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.466453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.478429 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.479712 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.486176 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.496318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.498877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.511137 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.517545 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e20726c-76b7-41eb-a27b-3deb88fcc6f9" 
path="/var/lib/kubelet/pods/4e20726c-76b7-41eb-a27b-3deb88fcc6f9/volumes" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.585614 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.876885 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerStarted","Data":"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.877895 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.889720 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1a29990-0400-4b85-86fe-2a00b5809576","Type":"ContainerStarted","Data":"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.889974 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" gracePeriod=30 Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.908729 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" podStartSLOduration=6.908702478 podStartE2EDuration="6.908702478s" podCreationTimestamp="2026-02-02 14:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:19.903895668 +0000 UTC m=+1321.548532438" watchObservedRunningTime="2026-02-02 14:55:19.908702478 +0000 UTC m=+1321.553339248" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.909527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerStarted","Data":"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.909572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerStarted","Data":"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.925114 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-log" containerID="cri-o://be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" gracePeriod=30 Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.925517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerStarted","Data":"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.925557 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerStarted","Data":"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527"} Feb 02 14:55:19 
crc kubenswrapper[4869]: I0202 14:55:19.925628 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-metadata" containerID="cri-o://c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" gracePeriod=30 Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.946238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2","Type":"ContainerStarted","Data":"c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb"} Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.953697 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.978755587 podStartE2EDuration="7.953662659s" podCreationTimestamp="2026-02-02 14:55:12 +0000 UTC" firstStartedPulling="2026-02-02 14:55:14.091416845 +0000 UTC m=+1315.736053615" lastFinishedPulling="2026-02-02 14:55:19.066323917 +0000 UTC m=+1320.710960687" observedRunningTime="2026-02-02 14:55:19.924347274 +0000 UTC m=+1321.568984034" watchObservedRunningTime="2026-02-02 14:55:19.953662659 +0000 UTC m=+1321.598299439" Feb 02 14:55:19 crc kubenswrapper[4869]: I0202 14:55:19.987642 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.600860331 podStartE2EDuration="6.987621439s" podCreationTimestamp="2026-02-02 14:55:13 +0000 UTC" firstStartedPulling="2026-02-02 14:55:14.688365906 +0000 UTC m=+1316.333002676" lastFinishedPulling="2026-02-02 14:55:19.075127004 +0000 UTC m=+1320.719763784" observedRunningTime="2026-02-02 14:55:19.95043186 +0000 UTC m=+1321.595068630" watchObservedRunningTime="2026-02-02 14:55:19.987621439 +0000 UTC m=+1321.632258209" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.008774 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.252845254 podStartE2EDuration="8.008742061s" podCreationTimestamp="2026-02-02 14:55:12 +0000 UTC" firstStartedPulling="2026-02-02 14:55:13.878726565 +0000 UTC m=+1315.523363335" lastFinishedPulling="2026-02-02 14:55:18.634623382 +0000 UTC m=+1320.279260142" observedRunningTime="2026-02-02 14:55:19.981839076 +0000 UTC m=+1321.626475856" watchObservedRunningTime="2026-02-02 14:55:20.008742061 +0000 UTC m=+1321.653378831" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.025071 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.006903052 podStartE2EDuration="7.025042525s" podCreationTimestamp="2026-02-02 14:55:13 +0000 UTC" firstStartedPulling="2026-02-02 14:55:14.619569085 +0000 UTC m=+1316.264205865" lastFinishedPulling="2026-02-02 14:55:18.637708568 +0000 UTC m=+1320.282345338" observedRunningTime="2026-02-02 14:55:20.009256935 +0000 UTC m=+1321.653893705" watchObservedRunningTime="2026-02-02 14:55:20.025042525 +0000 UTC m=+1321.669679295" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.136661 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.950290 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.958562 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"08a2d8ed761534c05fe2670f151170765676bc37409dea3bba0f77b45f9d496c"} Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961478 4869 generic.go:334] "Generic (PLEG): container finished" podID="57e664d1-4870-4eb5-8556-4418e41299eb" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" exitCode=0 Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961533 4869 generic.go:334] "Generic (PLEG): container finished" podID="57e664d1-4870-4eb5-8556-4418e41299eb" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" exitCode=143 Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerDied","Data":"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8"} Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961585 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerDied","Data":"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527"} Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57e664d1-4870-4eb5-8556-4418e41299eb","Type":"ContainerDied","Data":"5be81fda9a826f7e54ad4ca6e6d929236a63542303c28bf9d0e22fa1ebc93458"} Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.961650 4869 scope.go:117] "RemoveContainer" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" Feb 02 14:55:20 crc kubenswrapper[4869]: I0202 14:55:20.962025 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.005151 4869 scope.go:117] "RemoveContainer" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.017982 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") pod \"57e664d1-4870-4eb5-8556-4418e41299eb\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.018158 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") pod \"57e664d1-4870-4eb5-8556-4418e41299eb\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.018240 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") pod \"57e664d1-4870-4eb5-8556-4418e41299eb\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.018391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") pod \"57e664d1-4870-4eb5-8556-4418e41299eb\" (UID: \"57e664d1-4870-4eb5-8556-4418e41299eb\") " Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.018744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs" (OuterVolumeSpecName: "logs") pod "57e664d1-4870-4eb5-8556-4418e41299eb" (UID: "57e664d1-4870-4eb5-8556-4418e41299eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.020560 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57e664d1-4870-4eb5-8556-4418e41299eb-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.042951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds" (OuterVolumeSpecName: "kube-api-access-8n8ds") pod "57e664d1-4870-4eb5-8556-4418e41299eb" (UID: "57e664d1-4870-4eb5-8556-4418e41299eb"). InnerVolumeSpecName "kube-api-access-8n8ds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.048560 4869 scope.go:117] "RemoveContainer" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" Feb 02 14:55:21 crc kubenswrapper[4869]: E0202 14:55:21.049686 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": container with ID starting with c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8 not found: ID does not exist" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.049729 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8"} err="failed to get container status \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": rpc error: code = NotFound desc = could not find container \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": container with ID starting with c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8 not found: ID does not exist" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.049759 4869 scope.go:117] "RemoveContainer" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" Feb 02 14:55:21 crc kubenswrapper[4869]: E0202 14:55:21.050214 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": container with ID starting with be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527 not found: ID does not exist" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.050241 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527"} err="failed to get container status \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": rpc error: code = NotFound desc = could not find container \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": container with ID starting with be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527 not found: ID does not exist" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.050257 4869 scope.go:117] "RemoveContainer" containerID="c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.050459 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8"} err="failed to get container status \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": rpc error: code = NotFound desc = could not find container \"c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8\": container with ID starting with c726b0ee4d572c7bb6da293d1024256b26759e59b57aba037d60525a5ecc5ad8 not found: ID does not exist" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.050484 4869 scope.go:117] "RemoveContainer" containerID="be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.052560 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527"} err="failed to get container status \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": rpc error: code = NotFound desc = could not find container \"be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527\": container with ID starting with be30af9e4b009aa4a3f10f64fd668073f4096e31a4868939070779ae3554b527 not found: ID does not exist" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.069279 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57e664d1-4870-4eb5-8556-4418e41299eb" (UID: "57e664d1-4870-4eb5-8556-4418e41299eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.095575 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data" (OuterVolumeSpecName: "config-data") pod "57e664d1-4870-4eb5-8556-4418e41299eb" (UID: "57e664d1-4870-4eb5-8556-4418e41299eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.122955 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.123000 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e664d1-4870-4eb5-8556-4418e41299eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.123017 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n8ds\" (UniqueName: \"kubernetes.io/projected/57e664d1-4870-4eb5-8556-4418e41299eb-kube-api-access-8n8ds\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.315818 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.384186 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.435719 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:21 crc kubenswrapper[4869]: E0202 14:55:21.436459 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-log" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.436500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-log" Feb 02 14:55:21 crc kubenswrapper[4869]: E0202 14:55:21.436535 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-metadata" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.436542 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-metadata" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.436791 4869 
memory_manager.go:354] "RemoveStaleState removing state" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-metadata" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.436835 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" containerName="nova-metadata-log" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.438576 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.441673 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.443721 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.452752 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.491048 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e664d1-4870-4eb5-8556-4418e41299eb" path="/var/lib/kubelet/pods/57e664d1-4870-4eb5-8556-4418e41299eb/volumes" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.534545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.534675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.534823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.535088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.535179 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.637057 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " 
pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.638660 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.638796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.638998 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.639072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.639694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.643425 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.643755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.644401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.659297 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") pod \"nova-metadata-0\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.772856 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:21 crc kubenswrapper[4869]: I0202 14:55:21.997459 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"} Feb 02 14:55:22 crc kubenswrapper[4869]: I0202 14:55:22.040901 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 14:55:22 crc kubenswrapper[4869]: I0202 14:55:22.383120 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.020920 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerStarted","Data":"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"} Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.021445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerStarted","Data":"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731"} Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.021463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerStarted","Data":"22d67239dd7b49d55db153438c6a489811a47575626ce29e18944434f226cb57"} Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.029158 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"} Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.059109 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.059083162 podStartE2EDuration="2.059083162s" podCreationTimestamp="2026-02-02 14:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:23.048477169 +0000 UTC m=+1324.693113939" watchObservedRunningTime="2026-02-02 14:55:23.059083162 +0000 UTC m=+1324.703719932" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.220376 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.232608 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.232666 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.263935 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.609724 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:55:23 crc kubenswrapper[4869]: I0202 14:55:23.609772 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:55:24 crc kubenswrapper[4869]: I0202 14:55:24.050127 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"} Feb 02 14:55:24 crc kubenswrapper[4869]: I0202 14:55:24.085422 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 14:55:24 crc kubenswrapper[4869]: I0202 14:55:24.651294 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.176:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:24 crc kubenswrapper[4869]: I0202 14:55:24.651308 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.176:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:26 crc kubenswrapper[4869]: I0202 14:55:26.070768 4869 generic.go:334] "Generic (PLEG): container finished" podID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" containerID="38dd79ef05a995974ad73195962d823416fb4b0c857e118492f50f15f1f25c17" exitCode=0 Feb 02 14:55:26 crc kubenswrapper[4869]: I0202 14:55:26.070872 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerDied","Data":"38dd79ef05a995974ad73195962d823416fb4b0c857e118492f50f15f1f25c17"} Feb 02 14:55:26 crc kubenswrapper[4869]: I0202 14:55:26.773778 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:55:26 crc kubenswrapper[4869]: I0202 14:55:26.774474 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.088455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerStarted","Data":"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"} Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.119852 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.693992325 podStartE2EDuration="8.119827487s" podCreationTimestamp="2026-02-02 14:55:19 +0000 UTC" firstStartedPulling="2026-02-02 14:55:20.147570754 +0000 UTC m=+1321.792207524" lastFinishedPulling="2026-02-02 14:55:26.573405916 +0000 UTC m=+1328.218042686" observedRunningTime="2026-02-02 14:55:27.110515387 +0000 UTC m=+1328.755152167" watchObservedRunningTime="2026-02-02 14:55:27.119827487 +0000 UTC m=+1328.764464257" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.581431 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.688619 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") pod \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.689340 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") pod \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.689541 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") pod \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.689657 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") pod \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\" (UID: \"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0\") " Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.694584 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m" (OuterVolumeSpecName: "kube-api-access-7xx4m") pod "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" (UID: "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0"). InnerVolumeSpecName "kube-api-access-7xx4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.705949 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts" (OuterVolumeSpecName: "scripts") pod "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" (UID: "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.720063 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data" (OuterVolumeSpecName: "config-data") pod "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" (UID: "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.726351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" (UID: "3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.791800 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xx4m\" (UniqueName: \"kubernetes.io/projected/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-kube-api-access-7xx4m\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.792359 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.792373 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:27 crc kubenswrapper[4869]: I0202 14:55:27.792382 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103147 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2bx2t" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2bx2t" event={"ID":"3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0","Type":"ContainerDied","Data":"8a6758018e930eb35d181b72a0bf4424ef8cce214eee1037a29cee9e990a3ae0"} Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.103387 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a6758018e930eb35d181b72a0bf4424ef8cce214eee1037a29cee9e990a3ae0" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.104086 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.318744 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.319564 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" containerID="cri-o://a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" gracePeriod=30 Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.319531 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" containerID="cri-o://ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" gracePeriod=30 Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.336488 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.336921 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" containerID="cri-o://c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb" gracePeriod=30 Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.356528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 
14:55:28.357095 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-log" containerID="cri-o://0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" gracePeriod=30 Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.357317 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-metadata" containerID="cri-o://6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" gracePeriod=30 Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.769437 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.854658 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"] Feb 02 14:55:28 crc kubenswrapper[4869]: I0202 14:55:28.855039 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="dnsmasq-dns" containerID="cri-o://c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6" gracePeriod=10 Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.059231 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.122456 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123135 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123511 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") pod \"7e32e648-8194-4d43-8d61-820b72b8d1b4\" (UID: \"7e32e648-8194-4d43-8d61-820b72b8d1b4\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.123659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs" (OuterVolumeSpecName: 
"logs") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.124551 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e32e648-8194-4d43-8d61-820b72b8d1b4-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.133534 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k" (OuterVolumeSpecName: "kube-api-access-j4q5k") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "kube-api-access-j4q5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.136686 4869 generic.go:334] "Generic (PLEG): container finished" podID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" containerID="b53f792df7cff8163ee8a7592ca68143879b985452df8ad4b61543811725bc69" exitCode=0 Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.136763 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfr68" event={"ID":"6c4bee65-28e6-4f62-a2b5-b4d9227c5624","Type":"ContainerDied","Data":"b53f792df7cff8163ee8a7592ca68143879b985452df8ad4b61543811725bc69"} Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141359 4869 generic.go:334] "Generic (PLEG): container finished" podID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" exitCode=0 Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141392 4869 generic.go:334] "Generic (PLEG): container finished" podID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" exitCode=143 Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerDied","Data":"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"} Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerDied","Data":"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731"} Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141498 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e32e648-8194-4d43-8d61-820b72b8d1b4","Type":"ContainerDied","Data":"22d67239dd7b49d55db153438c6a489811a47575626ce29e18944434f226cb57"} Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141515 4869 scope.go:117] "RemoveContainer" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.141688 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.154660 4869 generic.go:334] "Generic (PLEG): container finished" podID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerID="c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6" exitCode=0 Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.154769 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerDied","Data":"c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6"} Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.171204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data" (OuterVolumeSpecName: "config-data") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.174867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.182049 4869 generic.go:334] "Generic (PLEG): container finished" podID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerID="ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" exitCode=143 Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.183480 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerDied","Data":"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec"} Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.213823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7e32e648-8194-4d43-8d61-820b72b8d1b4" (UID: "7e32e648-8194-4d43-8d61-820b72b8d1b4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
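
The mix of exitCode=0 and exitCode=143 in the ContainerDied events above follows the usual Unix convention: after "Killing container with a grace period" the runtime delivers SIGTERM, a process that dies from the signal exits with 128+15=143, and one that traps it and shuts down cleanly exits 0. Had the grace period (30s for nova-api/nova-metadata, 10s for dnsmasq-dns) expired, the runtime would escalate to SIGKILL, which shows up as 137. A minimal decoder for those codes:

```go
package main

import (
	"fmt"
	"syscall"
)

// signalFromExitCode maps the 128+signo convention back to a signal.
// 143 = 128 + SIGTERM(15), 137 = 128 + SIGKILL(9); anything <= 128 is
// treated as a normal exit status.
func signalFromExitCode(code int) (syscall.Signal, bool) {
	if code > 128 && code < 192 {
		return syscall.Signal(code - 128), true
	}
	return 0, false
}

func main() {
	for _, code := range []int{0, 143, 137} {
		if sig, ok := signalFromExitCode(code); ok {
			fmt.Printf("exit %d -> killed by %s\n", code, sig)
		} else {
			fmt.Printf("exit %d -> normal exit\n", code)
		}
	}
}
```
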
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.227560 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.227608 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4q5k\" (UniqueName: \"kubernetes.io/projected/7e32e648-8194-4d43-8d61-820b72b8d1b4-kube-api-access-j4q5k\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.227621 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.227630 4869 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e32e648-8194-4d43-8d61-820b72b8d1b4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.315384 4869 scope.go:117] "RemoveContainer" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.356811 4869 scope.go:117] "RemoveContainer" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.357606 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": container with ID starting with 6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec not found: ID does not exist" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.357643 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"} err="failed to get container status \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": rpc error: code = NotFound desc = could not find container \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": container with ID starting with 6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec not found: ID does not exist" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.357670 4869 scope.go:117] "RemoveContainer" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.358238 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": container with ID starting with 0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731 not found: ID does not exist" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.358259 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731"} err="failed to get container status \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": rpc 
error: code = NotFound desc = could not find container \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": container with ID starting with 0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731 not found: ID does not exist" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.358273 4869 scope.go:117] "RemoveContainer" containerID="6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.358626 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec"} err="failed to get container status \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": rpc error: code = NotFound desc = could not find container \"6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec\": container with ID starting with 6ac8c40921a26b37aa35bca095cae24f071471ce7eac00fc6dd33582afaa7fec not found: ID does not exist" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.358649 4869 scope.go:117] "RemoveContainer" containerID="0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.359011 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731"} err="failed to get container status \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": rpc error: code = NotFound desc = could not find container \"0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731\": container with ID starting with 0a24c1751ee4858de34361fc917ca857cf3ab4595a1a566b719f18656b0f8731 not found: ID does not exist" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.431812 4869 util.go:48] "No ready sandbox for pod can be found. 
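
The E-level "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries at 14:55:29.35 are benign: the kubelet is re-resolving container IDs it has already removed, and the CRI answers NotFound because the container no longer exists, which is exactly the end state cleanup was after. The usual Go pattern for treating that gRPC code as success looks like the sketch below; the function name is ours, not the kubelet's.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// handleRemoveResult treats NotFound as "already removed": when cleanup
// races with the runtime, a missing container is the desired outcome,
// so it is logged and swallowed rather than propagated as a failure.
func handleRemoveResult(containerID string, err error) error {
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already removed, nothing to do\n", containerID)
		return nil
	}
	return err
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(handleRemoveResult("6ac8c409...", err)) // prints <nil>
}
```
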
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.523976 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.534617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.534855 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.534897 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.535096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.535142 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") pod \"3c0c79bc-79ef-4876-b621-25ff976ecad2\" (UID: \"3c0c79bc-79ef-4876-b621-25ff976ecad2\") " Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.573770 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.590348 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv" (OuterVolumeSpecName: "kube-api-access-q4pvv") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "kube-api-access-q4pvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.607669 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617152 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="init" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617199 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="init" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617227 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="dnsmasq-dns" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617237 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="dnsmasq-dns" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617278 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-log" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617287 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-log" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617301 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" containerName="nova-manage" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617308 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" containerName="nova-manage" Feb 02 14:55:29 crc kubenswrapper[4869]: E0202 14:55:29.617340 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-metadata" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617347 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-metadata" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.617987 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-log" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.618021 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" containerName="nova-manage" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.618038 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" containerName="nova-metadata-metadata" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.620253 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" containerName="dnsmasq-dns" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.624254 4869 util.go:30] "No sandbox for pod can be found. 
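
The RemoveStaleState / "Deleted CPUSet assignment" lines above are the CPU and memory managers garbage-collecting per-container resource assignments left behind by pods that no longer exist (the old dnsmasq-dns, nova-metadata-0, and nova-manage UIDs); they are emitted at error level but are routine cleanup when a pod is replaced under a new UID. A toy model of that bookkeeping, keyed by (podUID, containerName) and not the kubelet's actual state store:

```go
package main

import "fmt"

// key mirrors how assignments are identified in the log lines:
// a pod UID plus a container name.
type key struct{ podUID, container string }

// removeStaleState drops any assignment whose pod is no longer active,
// the rough analogue of cpu_manager.go RemoveStaleState.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("removing stale assignment %s/%s\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"3c0c79bc-79ef-4876-b621-25ff976ecad2", "dnsmasq-dns"}:            "cpuset:0-3",
		{"19de8d9b-333e-4132-9b20-35258b84e935", "nova-metadata-metadata"}: "cpuset:0-3",
	}
	active := map[string]bool{"19de8d9b-333e-4132-9b20-35258b84e935": true}
	removeStaleState(assignments, active)
}
```
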
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.628070 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.629762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.631432 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.638984 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.642258 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.666309 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.674456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.674628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.674901 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675115 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675361 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675843 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675882 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4pvv\" (UniqueName: \"kubernetes.io/projected/3c0c79bc-79ef-4876-b621-25ff976ecad2-kube-api-access-q4pvv\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675909 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.675944 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.678191 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config" (OuterVolumeSpecName: "config") pod "3c0c79bc-79ef-4876-b621-25ff976ecad2" (UID: "3c0c79bc-79ef-4876-b621-25ff976ecad2"). InnerVolumeSpecName "config". 
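
The volume traffic in this window is the kubelet's volume manager reconciling desired against actual state: volumes for the new nova-metadata-0 UID (19de8d9b-...) go through VerifyControllerAttachedVolume -> MountVolume -> "MountVolume.SetUp succeeded", while the old dnsmasq-dns UID's volumes go through UnmountVolume -> "TearDown succeeded" -> "Volume detached". A deliberately simplified sketch of that loop, with illustrative names; the real reconciler lives in the kubelet's volumemanager package.

```go
package main

import "fmt"

type volume struct{ name, podUID string }

// reconcile mounts what is desired but absent and unmounts what is
// present but no longer desired, which is the shape of the paired
// MountVolume / UnmountVolume lines above.
func reconcile(desired, actual map[volume]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("MountVolume.SetUp %q for pod %s\n", v.name, v.podUID)
			actual[v] = true
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("UnmountVolume.TearDown %q for pod %s\n", v.name, v.podUID)
			delete(actual, v)
		}
	}
}

func main() {
	oldPod := "7e32e648-8194-4d43-8d61-820b72b8d1b4" // deleted nova-metadata-0
	newPod := "19de8d9b-333e-4132-9b20-35258b84e935" // replacement nova-metadata-0
	actual := map[volume]bool{{"config-data", oldPod}: true}
	desired := map[volume]bool{{"config-data", newPod}: true}
	reconcile(desired, actual)
}
```
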
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778644 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.778732 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c0c79bc-79ef-4876-b621-25ff976ecad2-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.779232 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.784113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.785511 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.799383 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 
crc kubenswrapper[4869]: I0202 14:55:29.802302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") pod \"nova-metadata-0\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " pod="openstack/nova-metadata-0" Feb 02 14:55:29 crc kubenswrapper[4869]: I0202 14:55:29.953531 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.201114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" event={"ID":"3c0c79bc-79ef-4876-b621-25ff976ecad2","Type":"ContainerDied","Data":"3aa5c96598f9d84b8ea60ab2f8542911baacbe20302c3b591676275481c40de5"} Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.201200 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-nntnx" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.201582 4869 scope.go:117] "RemoveContainer" containerID="c7f4bebc6ca091eeaa5756d4461e17a6ecfe84ca278f8fa7aada9f352039ebc6" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.235413 4869 scope.go:117] "RemoveContainer" containerID="e7c2657a3ab321678154788206bd1a322a53e101bc1e6703ecd4915c3962991f" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.263767 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"] Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.273148 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-nntnx"] Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.485116 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.620726 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.717886 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") pod \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.717997 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") pod \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.718057 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") pod \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.718216 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") pod \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\" (UID: \"6c4bee65-28e6-4f62-a2b5-b4d9227c5624\") " Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.726138 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx" (OuterVolumeSpecName: "kube-api-access-z8ndx") pod "6c4bee65-28e6-4f62-a2b5-b4d9227c5624" (UID: "6c4bee65-28e6-4f62-a2b5-b4d9227c5624"). InnerVolumeSpecName "kube-api-access-z8ndx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.726817 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts" (OuterVolumeSpecName: "scripts") pod "6c4bee65-28e6-4f62-a2b5-b4d9227c5624" (UID: "6c4bee65-28e6-4f62-a2b5-b4d9227c5624"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.748254 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c4bee65-28e6-4f62-a2b5-b4d9227c5624" (UID: "6c4bee65-28e6-4f62-a2b5-b4d9227c5624"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.758383 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data" (OuterVolumeSpecName: "config-data") pod "6c4bee65-28e6-4f62-a2b5-b4d9227c5624" (UID: "6c4bee65-28e6-4f62-a2b5-b4d9227c5624"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.820526 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ndx\" (UniqueName: \"kubernetes.io/projected/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-kube-api-access-z8ndx\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.820607 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.820624 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:30 crc kubenswrapper[4869]: I0202 14:55:30.820638 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4bee65-28e6-4f62-a2b5-b4d9227c5624-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.215541 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerStarted","Data":"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f"} Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.216092 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerStarted","Data":"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd"} Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.216110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerStarted","Data":"f2995f40ac54472f74017bd157579158e7b1849e936f0eca8f4970077675a29d"} Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.218906 4869 util.go:48] "No ready sandbox for pod can be found. 
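
Each "SyncLoop (PLEG)" line serializes one event from the pod lifecycle event generator; the event={...} payload maps onto a struct roughly like the one below, modeled on the kubelet's pkg/kubelet/pleg package from memory, so the field types are approximate (ID is a types.UID in the real code).

```go
package main

import "fmt"

type PodLifeCycleEventType string

const (
	ContainerStarted PodLifeCycleEventType = "ContainerStarted"
	ContainerDied    PodLifeCycleEventType = "ContainerDied"
)

// PodLifecycleEvent is what gets logged as event={"ID":...,"Type":...,
// "Data":...}: ID is the pod UID and Data is typically the container
// (or sandbox) ID the event refers to.
type PodLifecycleEvent struct {
	ID   string
	Type PodLifeCycleEventType
	Data interface{}
}

func main() {
	// Mirrors the 14:55:31.215541 entry for the replacement pod.
	e := PodLifecycleEvent{
		ID:   "19de8d9b-333e-4132-9b20-35258b84e935",
		Type: ContainerStarted,
		Data: "060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f",
	}
	fmt.Printf("SyncLoop (PLEG): event for pod %+v\n", e)
}
```
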
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfr68" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.219118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfr68" event={"ID":"6c4bee65-28e6-4f62-a2b5-b4d9227c5624","Type":"ContainerDied","Data":"f3ee909b4bcfcda6fe199a0eb7bb5f83a5693cde99ca407a1e05e7fdc864bdd9"} Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.219165 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ee909b4bcfcda6fe199a0eb7bb5f83a5693cde99ca407a1e05e7fdc864bdd9" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.249120 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.249087997 podStartE2EDuration="2.249087997s" podCreationTimestamp="2026-02-02 14:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:31.241172962 +0000 UTC m=+1332.885809732" watchObservedRunningTime="2026-02-02 14:55:31.249087997 +0000 UTC m=+1332.893724767" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.277801 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 14:55:31 crc kubenswrapper[4869]: E0202 14:55:31.278427 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" containerName="nova-cell1-conductor-db-sync" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.278449 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" containerName="nova-cell1-conductor-db-sync" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.278702 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" containerName="nova-cell1-conductor-db-sync" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.279454 4869 util.go:30] "No sandbox for pod can be found. 
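
The pod_startup_latency_tracker entries deduct the image-pull window from the end-to-end startup time, so slow registry pulls do not count against the startup SLO. For ceilometer-0 at 14:55:27, the SLO duration (1.693992325s) is the E2E duration (8.119827487s, creation to watchObservedRunningTime) minus the pull window (14:55:20.147570754 to 14:55:26.573405916, about 6.43s); for nova-metadata-0 just above, both pull timestamps are the zero time, so SLO and E2E coincide at 2.249s. A short check of that arithmetic, under our reading of the logged fields:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the ceilometer-0 tracker entry; the layout
	// is Go's default time.Time formatting, which these strings use.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-02 14:55:19 +0000 UTC")
	firstPull := parse("2026-02-02 14:55:20.147570754 +0000 UTC")
	lastPull := parse("2026-02-02 14:55:26.573405916 +0000 UTC")
	running := parse("2026-02-02 14:55:27.119827487 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration: 8.119827487s
	pull := lastPull.Sub(firstPull) // image pull window:   6.425835162s
	fmt.Println("E2E:", e2e, "SLO:", e2e-pull) // SLO: 1.693992325s
}
```
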
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.282287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.300859 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.330415 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.330519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.330674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ggg\" (UniqueName: \"kubernetes.io/projected/7ed5d945-0024-455d-a2d4-c8724693b402-kube-api-access-82ggg\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.432501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ggg\" (UniqueName: \"kubernetes.io/projected/7ed5d945-0024-455d-a2d4-c8724693b402-kube-api-access-82ggg\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.432937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.433071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.439135 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.439300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed5d945-0024-455d-a2d4-c8724693b402-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.457562 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ggg\" (UniqueName: \"kubernetes.io/projected/7ed5d945-0024-455d-a2d4-c8724693b402-kube-api-access-82ggg\") pod \"nova-cell1-conductor-0\" (UID: \"7ed5d945-0024-455d-a2d4-c8724693b402\") " pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.481345 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c0c79bc-79ef-4876-b621-25ff976ecad2" path="/var/lib/kubelet/pods/3c0c79bc-79ef-4876-b621-25ff976ecad2/volumes" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.483367 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e32e648-8194-4d43-8d61-820b72b8d1b4" path="/var/lib/kubelet/pods/7e32e648-8194-4d43-8d61-820b72b8d1b4/volumes" Feb 02 14:55:31 crc kubenswrapper[4869]: I0202 14:55:31.604700 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.191591 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.211214 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.244226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7ed5d945-0024-455d-a2d4-c8724693b402","Type":"ContainerStarted","Data":"d2d62b29a7011784afde2cc529b97e434fdf493a41bc3707e0e5c6d3927f9b46"} Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.247262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") pod \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.247603 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") pod \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.247682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") pod \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.247719 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") pod \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\" (UID: \"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.248041 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs" (OuterVolumeSpecName: "logs") pod "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" (UID: "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.248251 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.253303 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9" (OuterVolumeSpecName: "kube-api-access-gfqb9") pod "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" (UID: "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5"). InnerVolumeSpecName "kube-api-access-gfqb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.254184 4869 generic.go:334] "Generic (PLEG): container finished" podID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerID="c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb" exitCode=0 Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.254269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2","Type":"ContainerDied","Data":"c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb"} Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.257697 4869 generic.go:334] "Generic (PLEG): container finished" podID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerID="a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" exitCode=0 Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.258182 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.258159 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerDied","Data":"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65"} Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.258311 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dabd5514-892f-4f35-a9ca-2bf4cde0f5f5","Type":"ContainerDied","Data":"db1ddd3bf973a708ab65254e1770c7986a6d89e4a23d720be79a3c7d4e63d3a7"} Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.258368 4869 scope.go:117] "RemoveContainer" containerID="a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.298063 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data" (OuterVolumeSpecName: "config-data") pod "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" (UID: "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.300092 4869 scope.go:117] "RemoveContainer" containerID="ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.311783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" (UID: "dabd5514-892f-4f35-a9ca-2bf4cde0f5f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.342774 4869 scope.go:117] "RemoveContainer" containerID="a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.344616 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65\": container with ID starting with a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65 not found: ID does not exist" containerID="a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.344654 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65"} err="failed to get container status \"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65\": rpc error: code = NotFound desc = could not find container \"a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65\": container with ID starting with a4e76cc398b4e2453a74120a3b736088a4654854d68251b3fb2e32fdba10ea65 not found: ID does not exist" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.344683 4869 scope.go:117] "RemoveContainer" containerID="ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.345167 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec\": container with ID starting with ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec not found: ID does not exist" containerID="ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.345233 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec"} err="failed to get container status \"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec\": rpc error: code = NotFound desc = could not find container \"ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec\": container with ID starting with ed5c57166ea173613c1587d542e0f58d7e7c98bcd2169ddd1ee9fffd374473ec not found: ID does not exist" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.351608 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfqb9\" (UniqueName: \"kubernetes.io/projected/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-kube-api-access-gfqb9\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.351643 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.351657 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.377904 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.452859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") pod \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.453371 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") pod \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.453607 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") pod \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\" (UID: \"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2\") " Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.460960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n" (OuterVolumeSpecName: "kube-api-access-6ph7n") pod "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" (UID: "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2"). InnerVolumeSpecName "kube-api-access-6ph7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.490133 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data" (OuterVolumeSpecName: "config-data") pod "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" (UID: "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.494386 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" (UID: "a7dbbd97-e28d-4cff-8b00-c68c68ca73f2"). InnerVolumeSpecName "combined-ca-bundle". 
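
The kube-api-access-* volumes being mounted and torn down throughout this window (kube-api-access-6ph7n, -7xx4m, -j4q5k, -lvfz7, ...) are the service-account volumes injected into every pod: a projected volume combining a bound token, the cluster CA bundle, and the pod's namespace via the downward API. Roughly what such a volume looks like when expressed with the k8s.io/api/core/v1 types; the 3607-second expiry is the usual default and assumed here.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // common default for the bound token; an assumption
	vol := corev1.Volume{
		Name: "kube-api-access-6ph7n",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Bound service-account token, rotated by the kubelet.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// Cluster CA bundle from the kube-root-ca.crt ConfigMap.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// Pod namespace via the downward API.
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```
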
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.560949 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ph7n\" (UniqueName: \"kubernetes.io/projected/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-kube-api-access-6ph7n\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.561657 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.561739 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.609808 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.622802 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.645715 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.646523 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646547 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.646586 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" Feb 02 14:55:32 crc kubenswrapper[4869]: E0202 14:55:32.646611 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646618 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646899 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" containerName="nova-scheduler-scheduler" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646933 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-log" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.646946 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" containerName="nova-api-api" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.649706 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.652504 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.659142 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.663470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.663553 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.663586 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.663717 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.767414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.767839 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.767954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.768137 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.769805 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " 
pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.773836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.774715 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.795629 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") pod \"nova-api-0\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " pod="openstack/nova-api-0" Feb 02 14:55:32 crc kubenswrapper[4869]: I0202 14:55:32.969334 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.283241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7ed5d945-0024-455d-a2d4-c8724693b402","Type":"ContainerStarted","Data":"4dfa4e7c32f6380a95107b356bceeaebce3c44c96e6ee5973777cd176b675abb"} Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.283731 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.287234 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a7dbbd97-e28d-4cff-8b00-c68c68ca73f2","Type":"ContainerDied","Data":"51ac651ddd93f893e6d3273b647d0ad831e6db906a9c89298fdc003ced36fdc1"} Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.287306 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.287331 4869 scope.go:117] "RemoveContainer" containerID="c4aa68f042302c30cd40c34e3be8488a299f663066bd9291f517f1d3985e52fb" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.306834 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.306808202 podStartE2EDuration="2.306808202s" podCreationTimestamp="2026-02-02 14:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:33.30352521 +0000 UTC m=+1334.948161990" watchObservedRunningTime="2026-02-02 14:55:33.306808202 +0000 UTC m=+1334.951444972" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.352464 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.363722 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.377971 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.379483 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.388830 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.399169 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.461031 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.471851 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7dbbd97-e28d-4cff-8b00-c68c68ca73f2" path="/var/lib/kubelet/pods/a7dbbd97-e28d-4cff-8b00-c68c68ca73f2/volumes" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.472780 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabd5514-892f-4f35-a9ca-2bf4cde0f5f5" path="/var/lib/kubelet/pods/dabd5514-892f-4f35-a9ca-2bf4cde0f5f5/volumes" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.488361 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.488827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.489018 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.591022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.591103 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.591249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.600304 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") pod 
\"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.600662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.611273 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") pod \"nova-scheduler-0\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " pod="openstack/nova-scheduler-0" Feb 02 14:55:33 crc kubenswrapper[4869]: I0202 14:55:33.714514 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.218633 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.302074 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"719e20f4-473b-4859-8730-d15fe8c662aa","Type":"ContainerStarted","Data":"ad2b09060cc90b2b66052da409b095c5c7bf4ff33b856487d4aab5822df918b3"} Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.308864 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerStarted","Data":"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"} Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.308972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerStarted","Data":"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"} Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.308994 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerStarted","Data":"992e8673264eb1425686bfadfad4e661653112c95495432e701a166b56edfaa7"} Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.347492 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.347462915 podStartE2EDuration="2.347462915s" podCreationTimestamp="2026-02-02 14:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:34.334380102 +0000 UTC m=+1335.979016892" watchObservedRunningTime="2026-02-02 14:55:34.347462915 +0000 UTC m=+1335.992099685" Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.966998 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:55:34 crc kubenswrapper[4869]: I0202 14:55:34.967419 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 14:55:35 crc kubenswrapper[4869]: I0202 14:55:35.329983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"719e20f4-473b-4859-8730-d15fe8c662aa","Type":"ContainerStarted","Data":"38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88"} Feb 02 14:55:35 
crc kubenswrapper[4869]: I0202 14:55:35.349193 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.349121575 podStartE2EDuration="2.349121575s" podCreationTimestamp="2026-02-02 14:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:35.345772012 +0000 UTC m=+1336.990408782" watchObservedRunningTime="2026-02-02 14:55:35.349121575 +0000 UTC m=+1336.993758345" Feb 02 14:55:38 crc kubenswrapper[4869]: I0202 14:55:38.716010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 14:55:39 crc kubenswrapper[4869]: I0202 14:55:39.953931 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 14:55:39 crc kubenswrapper[4869]: I0202 14:55:39.954010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 14:55:40 crc kubenswrapper[4869]: I0202 14:55:40.974262 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:40 crc kubenswrapper[4869]: I0202 14:55:40.977657 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:41 crc kubenswrapper[4869]: I0202 14:55:41.634391 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 14:55:42 crc kubenswrapper[4869]: I0202 14:55:42.970747 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:55:42 crc kubenswrapper[4869]: I0202 14:55:42.970865 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 14:55:43 crc kubenswrapper[4869]: I0202 14:55:43.716138 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 14:55:43 crc kubenswrapper[4869]: I0202 14:55:43.753685 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 14:55:44 crc kubenswrapper[4869]: I0202 14:55:44.073345 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:44 crc kubenswrapper[4869]: I0202 14:55:44.073345 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 14:55:44 crc kubenswrapper[4869]: I0202 14:55:44.452480 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 
14:55:45 crc kubenswrapper[4869]: I0202 14:55:45.304289 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:55:45 crc kubenswrapper[4869]: I0202 14:55:45.304364 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.597400 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.966752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.967619 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.975247 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 14:55:49 crc kubenswrapper[4869]: I0202 14:55:49.976041 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.392864 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.486875 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1a29990-0400-4b85-86fe-2a00b5809576" containerID="8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" exitCode=137 Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.486939 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.486947 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1a29990-0400-4b85-86fe-2a00b5809576","Type":"ContainerDied","Data":"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838"} Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.488690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1a29990-0400-4b85-86fe-2a00b5809576","Type":"ContainerDied","Data":"0f50f5a7419043a9c8e4096aa4798378e9fbf6f1d58cf6115d2fbee8f617e5fe"} Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.488737 4869 scope.go:117] "RemoveContainer" containerID="8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.512624 4869 scope.go:117] "RemoveContainer" containerID="8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" Feb 02 14:55:50 crc kubenswrapper[4869]: E0202 14:55:50.514809 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838\": container with ID starting with 8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838 not found: ID does not exist" containerID="8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.514882 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838"} err="failed to get container status \"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838\": rpc error: code = NotFound desc = could not find container \"8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838\": container with ID starting with 8099c13c740e85ab27500a16f3edfc3a8325a6a92aa2f96ff646214e52b00838 not found: ID does not exist" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.522601 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") pod \"d1a29990-0400-4b85-86fe-2a00b5809576\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.522657 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") pod \"d1a29990-0400-4b85-86fe-2a00b5809576\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.522903 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") pod \"d1a29990-0400-4b85-86fe-2a00b5809576\" (UID: \"d1a29990-0400-4b85-86fe-2a00b5809576\") " Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.531687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52" (OuterVolumeSpecName: "kube-api-access-h4f52") pod "d1a29990-0400-4b85-86fe-2a00b5809576" (UID: "d1a29990-0400-4b85-86fe-2a00b5809576"). 
InnerVolumeSpecName "kube-api-access-h4f52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.557155 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1a29990-0400-4b85-86fe-2a00b5809576" (UID: "d1a29990-0400-4b85-86fe-2a00b5809576"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.565054 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data" (OuterVolumeSpecName: "config-data") pod "d1a29990-0400-4b85-86fe-2a00b5809576" (UID: "d1a29990-0400-4b85-86fe-2a00b5809576"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.626193 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.626623 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4f52\" (UniqueName: \"kubernetes.io/projected/d1a29990-0400-4b85-86fe-2a00b5809576-kube-api-access-h4f52\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.626639 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a29990-0400-4b85-86fe-2a00b5809576-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.823372 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.834090 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.905025 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:50 crc kubenswrapper[4869]: E0202 14:55:50.906109 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.906128 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.906481 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.907523 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.918246 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.918533 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.919949 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 14:55:50 crc kubenswrapper[4869]: I0202 14:55:50.938205 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037091 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmkm\" (UniqueName: \"kubernetes.io/projected/127a427f-66a5-4d07-ac48-aea0da95d425-kube-api-access-pdmkm\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037141 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037245 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.037328 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.139938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.139985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmkm\" (UniqueName: \"kubernetes.io/projected/127a427f-66a5-4d07-ac48-aea0da95d425-kube-api-access-pdmkm\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.140023 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.140054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.140112 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.146629 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.146831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.151602 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.151631 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/127a427f-66a5-4d07-ac48-aea0da95d425-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.172116 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmkm\" (UniqueName: \"kubernetes.io/projected/127a427f-66a5-4d07-ac48-aea0da95d425-kube-api-access-pdmkm\") pod \"nova-cell1-novncproxy-0\" (UID: \"127a427f-66a5-4d07-ac48-aea0da95d425\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.245441 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.474794 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a29990-0400-4b85-86fe-2a00b5809576" path="/var/lib/kubelet/pods/d1a29990-0400-4b85-86fe-2a00b5809576/volumes" Feb 02 14:55:51 crc kubenswrapper[4869]: I0202 14:55:51.716890 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 14:55:51 crc kubenswrapper[4869]: W0202 14:55:51.720114 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127a427f_66a5_4d07_ac48_aea0da95d425.slice/crio-3e986d38a2e64afa01833281a0c5f13c686f075f7adce6049ac539a324116c67 WatchSource:0}: Error finding container 3e986d38a2e64afa01833281a0c5f13c686f075f7adce6049ac539a324116c67: Status 404 returned error can't find the container with id 3e986d38a2e64afa01833281a0c5f13c686f075f7adce6049ac539a324116c67 Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.513791 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"127a427f-66a5-4d07-ac48-aea0da95d425","Type":"ContainerStarted","Data":"57f86155facf843e6551718f2f10381aae1b22f7d747e0f4415087f5a3853807"} Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.514694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"127a427f-66a5-4d07-ac48-aea0da95d425","Type":"ContainerStarted","Data":"3e986d38a2e64afa01833281a0c5f13c686f075f7adce6049ac539a324116c67"} Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.548493 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.548460409 podStartE2EDuration="2.548460409s" podCreationTimestamp="2026-02-02 14:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:52.539447946 +0000 UTC m=+1354.184084736" watchObservedRunningTime="2026-02-02 14:55:52.548460409 +0000 UTC m=+1354.193097179" Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.975072 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.977593 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.978924 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 14:55:52 crc kubenswrapper[4869]: I0202 14:55:52.983085 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.525881 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.530338 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.722772 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.724848 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.746998 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804448 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.804596 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910106 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.910447 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.912889 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.913984 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.914224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.915007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:53 crc kubenswrapper[4869]: I0202 14:55:53.938230 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") pod \"dnsmasq-dns-68d4b6d797-44fwt\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:54 crc kubenswrapper[4869]: I0202 14:55:54.060677 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:54 crc kubenswrapper[4869]: I0202 14:55:54.814322 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:55:55 crc kubenswrapper[4869]: I0202 14:55:55.570716 4869 generic.go:334] "Generic (PLEG): container finished" podID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerID="8cf856a4df374f3980cbc2ddc8eb1618f3c5e7b2fc6a969f06245cd19d267eb6" exitCode=0 Feb 02 14:55:55 crc kubenswrapper[4869]: I0202 14:55:55.570806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerDied","Data":"8cf856a4df374f3980cbc2ddc8eb1618f3c5e7b2fc6a969f06245cd19d267eb6"} Feb 02 14:55:55 crc kubenswrapper[4869]: I0202 14:55:55.571479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerStarted","Data":"b0a192cf90b2c34b440565bf71d8167abd947c406c2ba5f06b41ea7ba562f653"} Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.245857 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.392386 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.392701 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-central-agent" containerID="cri-o://e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3" gracePeriod=30 Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.392804 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-notification-agent" containerID="cri-o://daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79" gracePeriod=30 Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.392824 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="sg-core" containerID="cri-o://a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02" gracePeriod=30 Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.393385 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="proxy-httpd" containerID="cri-o://33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2" gracePeriod=30 Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.587144 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerStarted","Data":"498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7"} Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.588276 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.596454 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f07b304-b006-4eff-abbe-632939ffb20c" 
containerID="33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2" exitCode=0 Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.596496 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f07b304-b006-4eff-abbe-632939ffb20c" containerID="a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02" exitCode=2 Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.596523 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"} Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.596555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"} Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.630106 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" podStartSLOduration=3.630072902 podStartE2EDuration="3.630072902s" podCreationTimestamp="2026-02-02 14:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:55:56.617700835 +0000 UTC m=+1358.262337645" watchObservedRunningTime="2026-02-02 14:55:56.630072902 +0000 UTC m=+1358.274709672" Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.640966 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.641216 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log" containerID="cri-o://5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194" gracePeriod=30 Feb 02 14:55:56 crc kubenswrapper[4869]: I0202 14:55:56.641378 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api" containerID="cri-o://5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc" gracePeriod=30 Feb 02 14:55:57 crc kubenswrapper[4869]: I0202 14:55:57.609210 4869 generic.go:334] "Generic (PLEG): container finished" podID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerID="5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194" exitCode=143 Feb 02 14:55:57 crc kubenswrapper[4869]: I0202 14:55:57.609297 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerDied","Data":"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"} Feb 02 14:55:57 crc kubenswrapper[4869]: I0202 14:55:57.612192 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f07b304-b006-4eff-abbe-632939ffb20c" containerID="e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3" exitCode=0 Feb 02 14:55:57 crc kubenswrapper[4869]: I0202 14:55:57.612258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"} Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.258194 4869 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.269753 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376417 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376530 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") pod \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376594 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376731 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376773 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376817 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") pod \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376841 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376968 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") pod \"8f07b304-b006-4eff-abbe-632939ffb20c\" (UID: \"8f07b304-b006-4eff-abbe-632939ffb20c\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.376990 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.377020 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") pod \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.377217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") pod \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\" (UID: \"4b807d4b-0c84-4300-bdc8-997bd3fc4293\") " Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.377862 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs" (OuterVolumeSpecName: "logs") pod "4b807d4b-0c84-4300-bdc8-997bd3fc4293" (UID: "4b807d4b-0c84-4300-bdc8-997bd3fc4293"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.378165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.379014 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.379042 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b807d4b-0c84-4300-bdc8-997bd3fc4293-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.379053 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f07b304-b006-4eff-abbe-632939ffb20c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.385300 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv" (OuterVolumeSpecName: "kube-api-access-f2lcv") pod "4b807d4b-0c84-4300-bdc8-997bd3fc4293" (UID: "4b807d4b-0c84-4300-bdc8-997bd3fc4293"). InnerVolumeSpecName "kube-api-access-f2lcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.387100 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg" (OuterVolumeSpecName: "kube-api-access-ts7bg") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "kube-api-access-ts7bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.387293 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts" (OuterVolumeSpecName: "scripts") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.422284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data" (OuterVolumeSpecName: "config-data") pod "4b807d4b-0c84-4300-bdc8-997bd3fc4293" (UID: "4b807d4b-0c84-4300-bdc8-997bd3fc4293"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.455307 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b807d4b-0c84-4300-bdc8-997bd3fc4293" (UID: "4b807d4b-0c84-4300-bdc8-997bd3fc4293"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.463131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.466953 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481695 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2lcv\" (UniqueName: \"kubernetes.io/projected/4b807d4b-0c84-4300-bdc8-997bd3fc4293-kube-api-access-f2lcv\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481748 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481759 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts7bg\" (UniqueName: \"kubernetes.io/projected/8f07b304-b006-4eff-abbe-632939ffb20c-kube-api-access-ts7bg\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481773 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b807d4b-0c84-4300-bdc8-997bd3fc4293-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481784 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481794 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.481805 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.484148 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.540476 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data" (OuterVolumeSpecName: "config-data") pod "8f07b304-b006-4eff-abbe-632939ffb20c" (UID: "8f07b304-b006-4eff-abbe-632939ffb20c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.583967 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.584436 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f07b304-b006-4eff-abbe-632939ffb20c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643694 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f07b304-b006-4eff-abbe-632939ffb20c" containerID="daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79" exitCode=0 Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643749 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"} Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643822 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f07b304-b006-4eff-abbe-632939ffb20c","Type":"ContainerDied","Data":"08a2d8ed761534c05fe2670f151170765676bc37409dea3bba0f77b45f9d496c"} Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643845 4869 scope.go:117] "RemoveContainer" containerID="33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.643844 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.647216 4869 generic.go:334] "Generic (PLEG): container finished" podID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerID="5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc" exitCode=0 Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.647279 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerDied","Data":"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"} Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.647316 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b807d4b-0c84-4300-bdc8-997bd3fc4293","Type":"ContainerDied","Data":"992e8673264eb1425686bfadfad4e661653112c95495432e701a166b56edfaa7"} Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.647393 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.671213 4869 scope.go:117] "RemoveContainer" containerID="a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.705997 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.709339 4869 scope.go:117] "RemoveContainer" containerID="daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.734615 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.741053 4869 scope.go:117] "RemoveContainer" containerID="e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.788791 4869 scope.go:117] "RemoveContainer" containerID="33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.789242 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.790169 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2\": container with ID starting with 33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2 not found: ID does not exist" containerID="33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.790224 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2"} err="failed to get container status \"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2\": rpc error: code = NotFound desc = could not find container \"33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2\": container with ID starting with 33516ab90370f82f3f1b862e93f675eb23e1f4a68652cb1ea7034a78205e86d2 not found: ID does not exist" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.790262 4869 scope.go:117] "RemoveContainer" containerID="a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.790841 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02\": container with ID starting with a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02 not found: ID does not exist" containerID="a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.791263 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02"} err="failed to get container status \"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02\": rpc error: code = NotFound desc = could not find container \"a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02\": container with ID starting with a7f82ec46f4b3414955c14dd18072c1a8fa91f0bf84a78296b95f12219b9aa02 not found: ID does not exist" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 
14:56:00.791524 4869 scope.go:117] "RemoveContainer" containerID="daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.791962 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79\": container with ID starting with daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79 not found: ID does not exist" containerID="daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.792100 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79"} err="failed to get container status \"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79\": rpc error: code = NotFound desc = could not find container \"daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79\": container with ID starting with daf58c58189768c1ca96e3bfd4904f6f546c909033701fbcd53ecb60a59bba79 not found: ID does not exist" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.792206 4869 scope.go:117] "RemoveContainer" containerID="e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.793302 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3\": container with ID starting with e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3 not found: ID does not exist" containerID="e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.793340 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3"} err="failed to get container status \"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3\": rpc error: code = NotFound desc = could not find container \"e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3\": container with ID starting with e2787d0262fd63ca23a98278e60b43d07a5dc551ecd062097aec8ff828d891e3 not found: ID does not exist" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.793363 4869 scope.go:117] "RemoveContainer" containerID="5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.809207 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.816565 4869 scope.go:117] "RemoveContainer" containerID="5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823078 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823496 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-central-agent" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823517 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-central-agent" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823530 
4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-notification-agent" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823536 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-notification-agent" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823552 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="proxy-httpd" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823558 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="proxy-httpd" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823569 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823577 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823586 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823593 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.823607 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="sg-core" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823613 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="sg-core" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823829 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-central-agent" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823848 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="sg-core" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823856 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="proxy-httpd" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823863 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-log" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823870 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" containerName="ceilometer-notification-agent" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.823879 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" containerName="nova-api-api" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.825611 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.831864 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.832217 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.832960 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.860543 4869 scope.go:117] "RemoveContainer" containerID="5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.861048 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.862558 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc\": container with ID starting with 5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc not found: ID does not exist" containerID="5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.862594 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc"} err="failed to get container status \"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc\": rpc error: code = NotFound desc = could not find container \"5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc\": container with ID starting with 5fcef6cc857f96ae83527cb19e8132201b902495e0be3601e0e8d30b10e2d4fc not found: ID does not exist" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.862623 4869 scope.go:117] "RemoveContainer" containerID="5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194" Feb 02 14:56:00 crc kubenswrapper[4869]: E0202 14:56:00.863578 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194\": container with ID starting with 5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194 not found: ID does not exist" containerID="5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.863605 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194"} err="failed to get container status \"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194\": rpc error: code = NotFound desc = could not find container \"5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194\": container with ID starting with 5969c664680e1447dd4694aad25d3e010698a976a6dc39ff4d3832bae7cd6194 not found: ID does not exist" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.871856 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.873931 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.877515 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.877953 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.878130 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.890572 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.904389 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.904505 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.904542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.904578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.905005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.905567 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.905871 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:00 crc kubenswrapper[4869]: I0202 14:56:00.905970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008283 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008668 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008695 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008716 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008739 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.008776 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009166 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009387 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.009526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.010260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.015110 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.015169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.017164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.017176 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.018158 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.028022 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") pod \"ceilometer-0\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112267 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.112468 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.113302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.117277 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.117550 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.121669 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.124365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.135832 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") pod \"nova-api-0\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.148579 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.251006 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.251123 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.295431 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.474601 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b807d4b-0c84-4300-bdc8-997bd3fc4293" path="/var/lib/kubelet/pods/4b807d4b-0c84-4300-bdc8-997bd3fc4293/volumes" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.475985 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f07b304-b006-4eff-abbe-632939ffb20c" path="/var/lib/kubelet/pods/8f07b304-b006-4eff-abbe-632939ffb20c/volumes" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.683670 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.714272 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.830312 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:01 crc kubenswrapper[4869]: W0202 14:56:01.833008 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc96f1eaa_fe0c_4111_9ee0_21d067b0d1aa.slice/crio-a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26 WatchSource:0}: Error finding container a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26: Status 404 returned error can't find the container with id a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26 Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.931508 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"] Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.933318 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.936204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.936410 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 14:56:01 crc kubenswrapper[4869]: I0202 14:56:01.972207 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"] Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.050640 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.050709 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.050758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.051397 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.155737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.157943 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.158076 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.158198 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.164568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.164610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.164962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.181953 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") pod \"nova-cell1-cell-mapping-4296x\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.293780 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.683163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.683670 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"0796932bd84ec076e7335a7406319502760ed8351d5e889f11c65dc928821a28"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.688443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerStarted","Data":"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.688517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerStarted","Data":"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.688535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerStarted","Data":"a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26"} Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.726126 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7260985399999997 podStartE2EDuration="2.72609854s" podCreationTimestamp="2026-02-02 14:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:02.713590653 +0000 UTC m=+1364.358227433" watchObservedRunningTime="2026-02-02 14:56:02.72609854 +0000 UTC m=+1364.370735310" Feb 02 14:56:02 crc kubenswrapper[4869]: I0202 14:56:02.879146 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"] Feb 02 14:56:02 crc kubenswrapper[4869]: W0202 14:56:02.895612 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e3908c6_0f4b_4b27_8f07_9851e54d845b.slice/crio-def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571 WatchSource:0}: Error finding container def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571: Status 404 returned error can't find the container with id def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571 Feb 02 14:56:03 crc kubenswrapper[4869]: I0202 14:56:03.722206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"} Feb 02 14:56:03 crc kubenswrapper[4869]: I0202 14:56:03.729083 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4296x" event={"ID":"3e3908c6-0f4b-4b27-8f07-9851e54d845b","Type":"ContainerStarted","Data":"b0971dd6da0e21634706adc3fb0385fe86a85a8749020d44d9b581485a18729f"} Feb 02 14:56:03 crc kubenswrapper[4869]: I0202 14:56:03.729154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell1-cell-mapping-4296x" event={"ID":"3e3908c6-0f4b-4b27-8f07-9851e54d845b","Type":"ContainerStarted","Data":"def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571"} Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.062070 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.096365 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4296x" podStartSLOduration=3.096332299 podStartE2EDuration="3.096332299s" podCreationTimestamp="2026-02-02 14:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:03.752995213 +0000 UTC m=+1365.397631993" watchObservedRunningTime="2026-02-02 14:56:04.096332299 +0000 UTC m=+1365.740969079" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.162750 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.163193 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="dnsmasq-dns" containerID="cri-o://3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" gracePeriod=10 Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.748307 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.749624 4869 generic.go:334] "Generic (PLEG): container finished" podID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerID="3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" exitCode=0 Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.749725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerDied","Data":"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf"} Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.749762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" event={"ID":"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7","Type":"ContainerDied","Data":"9e1c8170bbe27458021229751e306804c8d9eb43efb07049fd479764776f395c"} Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.749787 4869 scope.go:117] "RemoveContainer" containerID="3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.766059 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4"} Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.779629 4869 scope.go:117] "RemoveContainer" containerID="49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.834242 4869 scope.go:117] "RemoveContainer" containerID="3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" Feb 02 14:56:04 crc kubenswrapper[4869]: E0202 14:56:04.834721 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf\": container with ID starting with 3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf not found: ID does not exist" containerID="3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.834788 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf"} err="failed to get container status \"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf\": rpc error: code = NotFound desc = could not find container \"3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf\": container with ID starting with 3ea128909ba9d9a4326263aeba230a55b7ea22d3b3de6b00d390827822601eaf not found: ID does not exist" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.834852 4869 scope.go:117] "RemoveContainer" containerID="49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c" Feb 02 14:56:04 crc kubenswrapper[4869]: E0202 14:56:04.835290 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c\": container with ID starting with 49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c not found: ID does not exist" containerID="49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.835330 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c"} err="failed to get container status \"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c\": rpc error: code = NotFound desc = could not find container \"49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c\": container with ID starting with 49ad03188d401a973c78c2c17e83bc8b9e6641ba125f5b1f1bb18dfb5620d63c not found: ID does not exist" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.872669 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.872820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.872900 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.873088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc 
kubenswrapper[4869]: I0202 14:56:04.873155 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") pod \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\" (UID: \"cf7f6efe-3991-4ab2-aab5-65a1ca71eda7\") " Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.884766 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4" (OuterVolumeSpecName: "kube-api-access-wkrl4") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "kube-api-access-wkrl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.929793 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.938667 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config" (OuterVolumeSpecName: "config") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.938723 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.954007 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" (UID: "cf7f6efe-3991-4ab2-aab5-65a1ca71eda7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.975943 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkrl4\" (UniqueName: \"kubernetes.io/projected/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-kube-api-access-wkrl4\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.975990 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.976004 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.976019 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:04 crc kubenswrapper[4869]: I0202 14:56:04.976032 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:05 crc kubenswrapper[4869]: I0202 14:56:05.776598 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-sfvmp" Feb 02 14:56:05 crc kubenswrapper[4869]: I0202 14:56:05.815681 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:56:05 crc kubenswrapper[4869]: I0202 14:56:05.826210 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-sfvmp"] Feb 02 14:56:07 crc kubenswrapper[4869]: I0202 14:56:07.476357 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" path="/var/lib/kubelet/pods/cf7f6efe-3991-4ab2-aab5-65a1ca71eda7/volumes" Feb 02 14:56:07 crc kubenswrapper[4869]: I0202 14:56:07.803804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerStarted","Data":"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e"} Feb 02 14:56:07 crc kubenswrapper[4869]: I0202 14:56:07.805225 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 14:56:07 crc kubenswrapper[4869]: I0202 14:56:07.838744 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.927509112 podStartE2EDuration="7.838710471s" podCreationTimestamp="2026-02-02 14:56:00 +0000 UTC" firstStartedPulling="2026-02-02 14:56:01.713699761 +0000 UTC m=+1363.358336531" lastFinishedPulling="2026-02-02 14:56:06.62490112 +0000 UTC m=+1368.269537890" observedRunningTime="2026-02-02 14:56:07.835129382 +0000 UTC m=+1369.479766172" watchObservedRunningTime="2026-02-02 14:56:07.838710471 +0000 UTC m=+1369.483347251" Feb 02 14:56:08 crc kubenswrapper[4869]: I0202 14:56:08.816130 4869 generic.go:334] "Generic (PLEG): container finished" podID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" containerID="b0971dd6da0e21634706adc3fb0385fe86a85a8749020d44d9b581485a18729f" exitCode=0 Feb 02 14:56:08 crc kubenswrapper[4869]: I0202 14:56:08.816239 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4296x" event={"ID":"3e3908c6-0f4b-4b27-8f07-9851e54d845b","Type":"ContainerDied","Data":"b0971dd6da0e21634706adc3fb0385fe86a85a8749020d44d9b581485a18729f"} Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.203252 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.307902 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") pod \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.308294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") pod \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.308502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") pod \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.308560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") pod \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\" (UID: \"3e3908c6-0f4b-4b27-8f07-9851e54d845b\") " Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.327838 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg" (OuterVolumeSpecName: "kube-api-access-lx9cg") pod "3e3908c6-0f4b-4b27-8f07-9851e54d845b" (UID: "3e3908c6-0f4b-4b27-8f07-9851e54d845b"). InnerVolumeSpecName "kube-api-access-lx9cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.329027 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts" (OuterVolumeSpecName: "scripts") pod "3e3908c6-0f4b-4b27-8f07-9851e54d845b" (UID: "3e3908c6-0f4b-4b27-8f07-9851e54d845b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.342658 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e3908c6-0f4b-4b27-8f07-9851e54d845b" (UID: "3e3908c6-0f4b-4b27-8f07-9851e54d845b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.343371 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data" (OuterVolumeSpecName: "config-data") pod "3e3908c6-0f4b-4b27-8f07-9851e54d845b" (UID: "3e3908c6-0f4b-4b27-8f07-9851e54d845b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.413864 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.413922 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx9cg\" (UniqueName: \"kubernetes.io/projected/3e3908c6-0f4b-4b27-8f07-9851e54d845b-kube-api-access-lx9cg\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.413936 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.413946 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e3908c6-0f4b-4b27-8f07-9851e54d845b-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.839212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4296x" event={"ID":"3e3908c6-0f4b-4b27-8f07-9851e54d845b","Type":"ContainerDied","Data":"def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571"} Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.839636 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def4add08cf14df2a2841536b20a71b7a77417837683295f9114df42d15b6571" Feb 02 14:56:10 crc kubenswrapper[4869]: I0202 14:56:10.839776 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4296x" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.039723 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.040068 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-log" containerID="cri-o://c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.040293 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-api" containerID="cri-o://bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.055972 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.056293 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" containerName="nova-scheduler-scheduler" containerID="cri-o://38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.117448 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.118209 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="19de8d9b-333e-4132-9b20-35258b84e935" 
containerName="nova-metadata-log" containerID="cri-o://00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.118451 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" containerID="cri-o://060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" gracePeriod=30 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.732177 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767371 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.767469 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") pod \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\" (UID: \"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa\") " Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.768026 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs" (OuterVolumeSpecName: "logs") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.777426 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq" (OuterVolumeSpecName: "kube-api-access-qnprq") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "kube-api-access-qnprq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.803381 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.811858 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data" (OuterVolumeSpecName: "config-data") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864106 4869 generic.go:334] "Generic (PLEG): container finished" podID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" exitCode=0 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864138 4869 generic.go:334] "Generic (PLEG): container finished" podID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" exitCode=143 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerDied","Data":"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49"} Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerDied","Data":"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2"} Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864246 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa","Type":"ContainerDied","Data":"a1212327d4106b15e75c0c9d7f021e2af767170d2731f1ddfe998b80b4920a26"} Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864265 4869 scope.go:117] "RemoveContainer" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.864431 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.869211 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.869648 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnprq\" (UniqueName: \"kubernetes.io/projected/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-kube-api-access-qnprq\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.869752 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.869839 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.870446 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.871293 4869 generic.go:334] "Generic (PLEG): container finished" podID="19de8d9b-333e-4132-9b20-35258b84e935" containerID="00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" exitCode=143 Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.871371 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerDied","Data":"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd"} Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.885025 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" (UID: "c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.890046 4869 scope.go:117] "RemoveContainer" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.915300 4869 scope.go:117] "RemoveContainer" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" Feb 02 14:56:11 crc kubenswrapper[4869]: E0202 14:56:11.917163 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": container with ID starting with bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49 not found: ID does not exist" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917206 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49"} err="failed to get container status \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": rpc error: code = NotFound desc = could not find container \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": container with ID starting with bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49 not found: ID does not exist" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917239 4869 scope.go:117] "RemoveContainer" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" Feb 02 14:56:11 crc kubenswrapper[4869]: E0202 14:56:11.917501 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": container with ID starting with c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2 not found: ID does not exist" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917525 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2"} err="failed to get container status \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": rpc error: code = NotFound desc = could not find container \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": container with ID starting with c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2 not found: ID does not exist" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917543 4869 scope.go:117] "RemoveContainer" containerID="bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917785 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49"} err="failed to get container status \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": rpc error: code = NotFound desc = could not find container \"bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49\": container with ID starting with bfa21e08dec5cc9eb9387029c5efb5a1cc58f49cd8841bba96a70017afe82e49 not found: ID does not exist" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.917806 4869 
scope.go:117] "RemoveContainer" containerID="c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.918221 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2"} err="failed to get container status \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": rpc error: code = NotFound desc = could not find container \"c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2\": container with ID starting with c2b2084541632e2ca6bab5516c312fa5452eff44fcf89a28327f5c81ae26dde2 not found: ID does not exist" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.970641 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:11 crc kubenswrapper[4869]: I0202 14:56:11.970677 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.240611 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.251633 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278022 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278571 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-log" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278596 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-log" Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278615 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="dnsmasq-dns" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278624 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="dnsmasq-dns" Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278636 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-api" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278650 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-api" Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278671 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" containerName="nova-manage" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278678 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" containerName="nova-manage" Feb 02 14:56:12 crc kubenswrapper[4869]: E0202 14:56:12.278775 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="init" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.278786 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="init" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.279018 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" containerName="nova-manage" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.279042 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-api" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.279057 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf7f6efe-3991-4ab2-aab5-65a1ca71eda7" containerName="dnsmasq-dns" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.279081 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" containerName="nova-api-log" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.296169 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.298964 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.299441 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.305783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.309655 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378192 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-config-data\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-logs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378257 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.378430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgvnw\" (UniqueName: \"kubernetes.io/projected/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-kube-api-access-mgvnw\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc 
kubenswrapper[4869]: I0202 14:56:12.378555 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480429 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480544 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-config-data\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-logs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgvnw\" (UniqueName: \"kubernetes.io/projected/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-kube-api-access-mgvnw\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.480701 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.481317 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-logs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.484965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.485077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.487561 
4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.488138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-config-data\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.499482 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgvnw\" (UniqueName: \"kubernetes.io/projected/6f2e77f7-6ccb-4992-8292-e69f277dc8f2-kube-api-access-mgvnw\") pod \"nova-api-0\" (UID: \"6f2e77f7-6ccb-4992-8292-e69f277dc8f2\") " pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.631538 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.898849 4869 generic.go:334] "Generic (PLEG): container finished" podID="719e20f4-473b-4859-8730-d15fe8c662aa" containerID="38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88" exitCode=0 Feb 02 14:56:12 crc kubenswrapper[4869]: I0202 14:56:12.899135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"719e20f4-473b-4859-8730-d15fe8c662aa","Type":"ContainerDied","Data":"38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.131816 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 14:56:13 crc kubenswrapper[4869]: W0202 14:56:13.136662 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f2e77f7_6ccb_4992_8292_e69f277dc8f2.slice/crio-f35084fee3f102ede55274efd398f7bd7d694b304fef98e53faf654a765ec878 WatchSource:0}: Error finding container f35084fee3f102ede55274efd398f7bd7d694b304fef98e53faf654a765ec878: Status 404 returned error can't find the container with id f35084fee3f102ede55274efd398f7bd7d694b304fef98e53faf654a765ec878 Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.280627 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.404200 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") pod \"719e20f4-473b-4859-8730-d15fe8c662aa\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.404270 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") pod \"719e20f4-473b-4859-8730-d15fe8c662aa\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.404358 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") pod \"719e20f4-473b-4859-8730-d15fe8c662aa\" (UID: \"719e20f4-473b-4859-8730-d15fe8c662aa\") " Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.408996 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p" (OuterVolumeSpecName: "kube-api-access-d7t4p") pod "719e20f4-473b-4859-8730-d15fe8c662aa" (UID: "719e20f4-473b-4859-8730-d15fe8c662aa"). InnerVolumeSpecName "kube-api-access-d7t4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.438986 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "719e20f4-473b-4859-8730-d15fe8c662aa" (UID: "719e20f4-473b-4859-8730-d15fe8c662aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.442384 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data" (OuterVolumeSpecName: "config-data") pod "719e20f4-473b-4859-8730-d15fe8c662aa" (UID: "719e20f4-473b-4859-8730-d15fe8c662aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.506601 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.506638 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7t4p\" (UniqueName: \"kubernetes.io/projected/719e20f4-473b-4859-8730-d15fe8c662aa-kube-api-access-d7t4p\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.506653 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719e20f4-473b-4859-8730-d15fe8c662aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.511424 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa" path="/var/lib/kubelet/pods/c96f1eaa-fe0c-4111-9ee0-21d067b0d1aa/volumes" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.920548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f2e77f7-6ccb-4992-8292-e69f277dc8f2","Type":"ContainerStarted","Data":"3e7dd1a52bd7442cf06499e0562d1c21586e6fd515cec10ecef1c409c3e41eeb"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.920630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f2e77f7-6ccb-4992-8292-e69f277dc8f2","Type":"ContainerStarted","Data":"00d4cc7404af22df7fd841747b98d88cef413f17a55995e3c395a6791d71c4d5"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.920642 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6f2e77f7-6ccb-4992-8292-e69f277dc8f2","Type":"ContainerStarted","Data":"f35084fee3f102ede55274efd398f7bd7d694b304fef98e53faf654a765ec878"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.925288 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"719e20f4-473b-4859-8730-d15fe8c662aa","Type":"ContainerDied","Data":"ad2b09060cc90b2b66052da409b095c5c7bf4ff33b856487d4aab5822df918b3"} Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.925357 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.925403 4869 scope.go:117] "RemoveContainer" containerID="38f1149a86606285d1234ece49328822c5d3b92a782675e670f6ae4acb165b88" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.950440 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.950417858 podStartE2EDuration="1.950417858s" podCreationTimestamp="2026-02-02 14:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:13.941832976 +0000 UTC m=+1375.586469766" watchObservedRunningTime="2026-02-02 14:56:13.950417858 +0000 UTC m=+1375.595054628" Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.973901 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:13 crc kubenswrapper[4869]: I0202 14:56:13.996699 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.006357 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:14 crc kubenswrapper[4869]: E0202 14:56:14.006978 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" containerName="nova-scheduler-scheduler" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.006999 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" containerName="nova-scheduler-scheduler" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.007194 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" containerName="nova-scheduler-scheduler" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.008102 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.013471 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.022090 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-config-data\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.022147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhv4\" (UniqueName: \"kubernetes.io/projected/46796adc-7f57-405f-bb4c-a2ccb79153f2-kube-api-access-grhv4\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.022226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.026950 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.124599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.125101 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-config-data\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.125228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhv4\" (UniqueName: \"kubernetes.io/projected/46796adc-7f57-405f-bb4c-a2ccb79153f2-kube-api-access-grhv4\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.132432 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.132847 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46796adc-7f57-405f-bb4c-a2ccb79153f2-config-data\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.145747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhv4\" (UniqueName: 
\"kubernetes.io/projected/46796adc-7f57-405f-bb4c-a2ccb79153f2-kube-api-access-grhv4\") pod \"nova-scheduler-0\" (UID: \"46796adc-7f57-405f-bb4c-a2ccb79153f2\") " pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.335447 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.780449 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.844837 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.844949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.845049 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.845076 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.845235 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") pod \"19de8d9b-333e-4132-9b20-35258b84e935\" (UID: \"19de8d9b-333e-4132-9b20-35258b84e935\") " Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.846364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs" (OuterVolumeSpecName: "logs") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.863578 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7" (OuterVolumeSpecName: "kube-api-access-lvfz7") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "kube-api-access-lvfz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.888333 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.894161 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data" (OuterVolumeSpecName: "config-data") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.906973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.947795 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.947844 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.947860 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvfz7\" (UniqueName: \"kubernetes.io/projected/19de8d9b-333e-4132-9b20-35258b84e935-kube-api-access-lvfz7\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.947874 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19de8d9b-333e-4132-9b20-35258b84e935-logs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949009 4869 generic.go:334] "Generic (PLEG): container finished" podID="19de8d9b-333e-4132-9b20-35258b84e935" containerID="060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" exitCode=0 Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerDied","Data":"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f"} Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949165 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"19de8d9b-333e-4132-9b20-35258b84e935","Type":"ContainerDied","Data":"f2995f40ac54472f74017bd157579158e7b1849e936f0eca8f4970077675a29d"} Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949187 4869 scope.go:117] "RemoveContainer" containerID="060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.949222 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.956303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46796adc-7f57-405f-bb4c-a2ccb79153f2","Type":"ContainerStarted","Data":"b061032e19eaddc126231c75da55fdb1cc47af650877d0736bd1df81a7b8991e"} Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.971001 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "19de8d9b-333e-4132-9b20-35258b84e935" (UID: "19de8d9b-333e-4132-9b20-35258b84e935"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:14 crc kubenswrapper[4869]: I0202 14:56:14.985288 4869 scope.go:117] "RemoveContainer" containerID="00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.021649 4869 scope.go:117] "RemoveContainer" containerID="060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" Feb 02 14:56:15 crc kubenswrapper[4869]: E0202 14:56:15.022287 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f\": container with ID starting with 060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f not found: ID does not exist" containerID="060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.022325 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f"} err="failed to get container status \"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f\": rpc error: code = NotFound desc = could not find container \"060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f\": container with ID starting with 060aad4cb7bd20d66e3bb6a3bffbf9529c2f534c73ec22cfee55626be0ab9f5f not found: ID does not exist" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.022352 4869 scope.go:117] "RemoveContainer" containerID="00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" Feb 02 14:56:15 crc kubenswrapper[4869]: E0202 14:56:15.025358 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd\": container with ID starting with 00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd not found: ID does not exist" containerID="00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.025393 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd"} err="failed to get container status \"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd\": rpc error: code = NotFound desc = could not find container \"00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd\": container with ID starting with 00efd1b34f4b48246ed6c6ec10e8a78a42c1d2906001c2de6abc1b719a97ebcd not found: ID does not exist" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.048787 4869 
reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/19de8d9b-333e-4132-9b20-35258b84e935-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.286675 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.297100 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.304992 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.305308 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.305382 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.306615 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.306693 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666" gracePeriod=600 Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314014 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:15 crc kubenswrapper[4869]: E0202 14:56:15.314566 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-log" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314597 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-log" Feb 02 14:56:15 crc kubenswrapper[4869]: E0202 14:56:15.314654 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314664 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314887 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-log" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.314947 4869 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="19de8d9b-333e-4132-9b20-35258b84e935" containerName="nova-metadata-metadata" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.316011 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.324541 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.325087 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.327234 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355412 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c133ea7-0c2e-4338-a24b-319409d4e41a-logs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwf89\" (UniqueName: \"kubernetes.io/projected/0c133ea7-0c2e-4338-a24b-319409d4e41a-kube-api-access-xwf89\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355588 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-config-data\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.355611 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457726 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-config-data\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457890 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c133ea7-0c2e-4338-a24b-319409d4e41a-logs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457929 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.457964 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwf89\" (UniqueName: \"kubernetes.io/projected/0c133ea7-0c2e-4338-a24b-319409d4e41a-kube-api-access-xwf89\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.458451 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c133ea7-0c2e-4338-a24b-319409d4e41a-logs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.465269 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.467240 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.476577 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c133ea7-0c2e-4338-a24b-319409d4e41a-config-data\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.478522 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19de8d9b-333e-4132-9b20-35258b84e935" path="/var/lib/kubelet/pods/19de8d9b-333e-4132-9b20-35258b84e935/volumes" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.479345 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719e20f4-473b-4859-8730-d15fe8c662aa" path="/var/lib/kubelet/pods/719e20f4-473b-4859-8730-d15fe8c662aa/volumes" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.481434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwf89\" (UniqueName: \"kubernetes.io/projected/0c133ea7-0c2e-4338-a24b-319409d4e41a-kube-api-access-xwf89\") pod \"nova-metadata-0\" (UID: \"0c133ea7-0c2e-4338-a24b-319409d4e41a\") " pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.664673 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.993821 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666" exitCode=0 Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.994380 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666"} Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.994419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9"} Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.994444 4869 scope.go:117] "RemoveContainer" containerID="1bef5335419b86b163b34c34d864f100562e541355ca4d13fea32016fe7045a5" Feb 02 14:56:15 crc kubenswrapper[4869]: I0202 14:56:15.997330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46796adc-7f57-405f-bb4c-a2ccb79153f2","Type":"ContainerStarted","Data":"74cce2da88f222488003067f7b34f7c51117b43c17f51b4d3fe102d888d2fa77"} Feb 02 14:56:16 crc kubenswrapper[4869]: I0202 14:56:16.045587 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.045564211 podStartE2EDuration="3.045564211s" podCreationTimestamp="2026-02-02 14:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:16.044455694 +0000 UTC m=+1377.689092484" watchObservedRunningTime="2026-02-02 14:56:16.045564211 +0000 UTC m=+1377.690200981" Feb 02 14:56:16 crc kubenswrapper[4869]: I0202 14:56:16.158555 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 14:56:17 crc kubenswrapper[4869]: I0202 14:56:17.016173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c133ea7-0c2e-4338-a24b-319409d4e41a","Type":"ContainerStarted","Data":"b0cb1b2d299f5b885b8ebda4139c41e9e524d39f49517385d35d41463db733a7"} Feb 02 14:56:17 crc kubenswrapper[4869]: I0202 14:56:17.016898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c133ea7-0c2e-4338-a24b-319409d4e41a","Type":"ContainerStarted","Data":"299487cfb600a0ff9459e9a0b6428d7aa8dc8703ed64dc09b0c82b39fdafed20"} Feb 02 14:56:17 crc kubenswrapper[4869]: I0202 14:56:17.016939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c133ea7-0c2e-4338-a24b-319409d4e41a","Type":"ContainerStarted","Data":"a0fe56cdaddddff2a1fd1474f11a5990f8338dc794e8b6342b28cfaa1f1b8386"} Feb 02 14:56:17 crc kubenswrapper[4869]: I0202 14:56:17.049560 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.049530457 podStartE2EDuration="2.049530457s" podCreationTimestamp="2026-02-02 14:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:56:17.045373135 +0000 UTC m=+1378.690009915" 
Feb 02 14:56:19 crc kubenswrapper[4869]: I0202 14:56:19.336511 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 02 14:56:20 crc kubenswrapper[4869]: I0202 14:56:20.665840 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 14:56:20 crc kubenswrapper[4869]: I0202 14:56:20.666569 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 14:56:22 crc kubenswrapper[4869]: I0202 14:56:22.632002 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 14:56:22 crc kubenswrapper[4869]: I0202 14:56:22.632076 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 02 14:56:23 crc kubenswrapper[4869]: I0202 14:56:23.648151 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6f2e77f7-6ccb-4992-8292-e69f277dc8f2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 14:56:23 crc kubenswrapper[4869]: I0202 14:56:23.648144 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6f2e77f7-6ccb-4992-8292-e69f277dc8f2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 14:56:24 crc kubenswrapper[4869]: I0202 14:56:24.336511 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 02 14:56:24 crc kubenswrapper[4869]: I0202 14:56:24.366258 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 02 14:56:25 crc kubenswrapper[4869]: I0202 14:56:25.124559 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 02 14:56:25 crc kubenswrapper[4869]: I0202 14:56:25.665800 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 14:56:25 crc kubenswrapper[4869]: I0202 14:56:25.665886 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 02 14:56:26 crc kubenswrapper[4869]: I0202 14:56:26.680374 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0c133ea7-0c2e-4338-a24b-319409d4e41a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 14:56:26 crc kubenswrapper[4869]: I0202 14:56:26.680382 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0c133ea7-0c2e-4338-a24b-319409d4e41a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 02 14:56:31 crc kubenswrapper[4869]: I0202 14:56:31.181747 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 02 14:56:32 crc kubenswrapper[4869]: I0202 14:56:32.639495 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 14:56:32 crc kubenswrapper[4869]: I0202 14:56:32.641029 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 14:56:32 crc kubenswrapper[4869]: I0202 14:56:32.641508 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 02 14:56:32 crc kubenswrapper[4869]: I0202 14:56:32.653421 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 14:56:33 crc kubenswrapper[4869]: I0202 14:56:33.193643 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 02 14:56:33 crc kubenswrapper[4869]: I0202 14:56:33.204617 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 02 14:56:35 crc kubenswrapper[4869]: I0202 14:56:35.672054 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 14:56:35 crc kubenswrapper[4869]: I0202 14:56:35.672149 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 02 14:56:35 crc kubenswrapper[4869]: I0202 14:56:35.677990 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 14:56:35 crc kubenswrapper[4869]: I0202 14:56:35.680802 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 02 14:56:44 crc kubenswrapper[4869]: I0202 14:56:44.386976 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 14:56:46 crc kubenswrapper[4869]: I0202 14:56:46.950214 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 14:56:48 crc kubenswrapper[4869]: I0202 14:56:48.905317 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6x247"]
Feb 02 14:56:48 crc kubenswrapper[4869]: I0202 14:56:48.908166 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:48 crc kubenswrapper[4869]: I0202 14:56:48.942379 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"]
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.054783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.055280 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.055489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.158058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.158621 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.158753 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.159117 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.159480 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.197262 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") pod \"redhat-operators-6x247\" (UID: \"4e5afe82-077a-4545-84a3-54f108a39d37\") " pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.235470 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.665617 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" containerID="cri-o://0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" gracePeriod=604795
Feb 02 14:56:49 crc kubenswrapper[4869]: I0202 14:56:49.783239 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"]
Feb 02 14:56:50 crc kubenswrapper[4869]: I0202 14:56:50.411054 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e5afe82-077a-4545-84a3-54f108a39d37" containerID="a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58" exitCode=0
Feb 02 14:56:50 crc kubenswrapper[4869]: I0202 14:56:50.411515 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerDied","Data":"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58"}
Feb 02 14:56:50 crc kubenswrapper[4869]: I0202 14:56:50.411567 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerStarted","Data":"d15cca6f8345e4d73be82151bb0e28ba11b1504dccb9fda5d84b628c49012abf"}
Feb 02 14:56:52 crc kubenswrapper[4869]: I0202 14:56:52.418069 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" containerID="cri-o://7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa" gracePeriod=604795
Feb 02 14:56:52 crc kubenswrapper[4869]: I0202 14:56:52.433583 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e5afe82-077a-4545-84a3-54f108a39d37" containerID="7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73" exitCode=0
Feb 02 14:56:52 crc kubenswrapper[4869]: I0202 14:56:52.433650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerDied","Data":"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73"}
Feb 02 14:56:54 crc kubenswrapper[4869]: I0202 14:56:54.858460 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused"
Feb 02 14:56:55 crc kubenswrapper[4869]: I0202 14:56:55.213650 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused"
Feb 02 14:56:55 crc kubenswrapper[4869]: I0202 14:56:55.499286 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerStarted","Data":"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56"}
pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerStarted","Data":"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56"} Feb 02 14:56:55 crc kubenswrapper[4869]: I0202 14:56:55.532428 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6x247" podStartSLOduration=3.116609496 podStartE2EDuration="7.532406762s" podCreationTimestamp="2026-02-02 14:56:48 +0000 UTC" firstStartedPulling="2026-02-02 14:56:50.415150169 +0000 UTC m=+1412.059786939" lastFinishedPulling="2026-02-02 14:56:54.830947435 +0000 UTC m=+1416.475584205" observedRunningTime="2026-02-02 14:56:55.52094041 +0000 UTC m=+1417.165577190" watchObservedRunningTime="2026-02-02 14:56:55.532406762 +0000 UTC m=+1417.177043552" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.286280 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.428870 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.428989 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429124 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429271 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429396 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429450 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfjdr\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.429591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") pod \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\" (UID: \"b339c96d-7eb1-4359-bcc3-6853622d5aa6\") " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.431073 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.431274 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.431290 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.445726 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.449358 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.450147 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.457155 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr" (OuterVolumeSpecName: "kube-api-access-jfjdr") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "kube-api-access-jfjdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.458123 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info" (OuterVolumeSpecName: "pod-info") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.527685 4869 generic.go:334] "Generic (PLEG): container finished" podID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerID="0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" exitCode=0 Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.529139 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.529200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerDied","Data":"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1"} Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.529301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b339c96d-7eb1-4359-bcc3-6853622d5aa6","Type":"ContainerDied","Data":"71fad2894e615ac487036b5543ff5a581a462b5f6ce828abdd4e67eb7d91443b"} Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.529332 4869 scope.go:117] "RemoveContainer" containerID="0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533794 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfjdr\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-kube-api-access-jfjdr\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533824 4869 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533855 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533889 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533899 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533930 4869 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b339c96d-7eb1-4359-bcc3-6853622d5aa6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533939 4869 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b339c96d-7eb1-4359-bcc3-6853622d5aa6-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.533949 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.550252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data" (OuterVolumeSpecName: "config-data") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.557149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf" (OuterVolumeSpecName: "server-conf") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.577949 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.602661 4869 scope.go:117] "RemoveContainer" containerID="9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.606046 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b339c96d-7eb1-4359-bcc3-6853622d5aa6" (UID: "b339c96d-7eb1-4359-bcc3-6853622d5aa6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.628057 4869 scope.go:117] "RemoveContainer" containerID="0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" Feb 02 14:56:56 crc kubenswrapper[4869]: E0202 14:56:56.628781 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1\": container with ID starting with 0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1 not found: ID does not exist" containerID="0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.628886 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1"} err="failed to get container status \"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1\": rpc error: code = NotFound desc = could not find container \"0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1\": container with ID starting with 0413c209b159d6bae742c77b93755d310367e3aa878efd2e70d95932f5d8e5e1 not found: ID does not exist" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.628965 4869 scope.go:117] "RemoveContainer" containerID="9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7" Feb 02 14:56:56 crc kubenswrapper[4869]: E0202 14:56:56.629387 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7\": container with ID starting with 9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7 not found: ID does not exist" containerID="9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.629489 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7"} err="failed to get container status 
\"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7\": rpc error: code = NotFound desc = could not find container \"9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7\": container with ID starting with 9ba6b36b1af0f5b3dcbd16ea04d17b7b6053016e832590b9b2d33dd354fff0c7 not found: ID does not exist" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.640301 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b339c96d-7eb1-4359-bcc3-6853622d5aa6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.640747 4869 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.640866 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b339c96d-7eb1-4359-bcc3-6853622d5aa6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.640963 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.879206 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.897803 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.912147 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:56 crc kubenswrapper[4869]: E0202 14:56:56.913033 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="setup-container" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.913151 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="setup-container" Feb 02 14:56:56 crc kubenswrapper[4869]: E0202 14:56:56.913264 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.913340 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.913650 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" containerName="rabbitmq" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.915421 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.918101 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.918400 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.918740 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.918936 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.922639 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.922849 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gjvp4" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.924814 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 14:56:56 crc kubenswrapper[4869]: I0202 14:56:56.936123 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049370 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049569 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d228ac68-eb5f-494a-bf43-6cbca346ae24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049673 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-config-data\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d228ac68-eb5f-494a-bf43-6cbca346ae24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049759 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049777 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fnq\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-kube-api-access-76fnq\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.049853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d228ac68-eb5f-494a-bf43-6cbca346ae24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152444 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d228ac68-eb5f-494a-bf43-6cbca346ae24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152462 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-config-data\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " 
pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152491 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152510 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76fnq\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-kube-api-access-76fnq\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152604 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.152636 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.153223 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.154284 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.154509 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.154579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-config-data\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.154880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.155053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d228ac68-eb5f-494a-bf43-6cbca346ae24-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.157877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d228ac68-eb5f-494a-bf43-6cbca346ae24-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.158843 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.159769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.160118 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d228ac68-eb5f-494a-bf43-6cbca346ae24-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.172958 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76fnq\" (UniqueName: \"kubernetes.io/projected/d228ac68-eb5f-494a-bf43-6cbca346ae24-kube-api-access-76fnq\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.193417 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"d228ac68-eb5f-494a-bf43-6cbca346ae24\") " pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.238854 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.477434 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b339c96d-7eb1-4359-bcc3-6853622d5aa6" path="/var/lib/kubelet/pods/b339c96d-7eb1-4359-bcc3-6853622d5aa6/volumes" Feb 02 14:56:57 crc kubenswrapper[4869]: I0202 14:56:57.970978 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 14:56:57 crc kubenswrapper[4869]: W0202 14:56:57.980757 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd228ac68_eb5f_494a_bf43_6cbca346ae24.slice/crio-1a34a0e2d2d9310b9475603e1200965aa415948cbc5864f4bd0d6d919bfdd9df WatchSource:0}: Error finding container 1a34a0e2d2d9310b9475603e1200965aa415948cbc5864f4bd0d6d919bfdd9df: Status 404 returned error can't find the container with id 1a34a0e2d2d9310b9475603e1200965aa415948cbc5864f4bd0d6d919bfdd9df Feb 02 14:56:58 crc kubenswrapper[4869]: I0202 14:56:58.640000 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d228ac68-eb5f-494a-bf43-6cbca346ae24","Type":"ContainerStarted","Data":"1a34a0e2d2d9310b9475603e1200965aa415948cbc5864f4bd0d6d919bfdd9df"} Feb 02 14:56:59 crc kubenswrapper[4869]: I0202 14:56:59.236632 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:56:59 crc kubenswrapper[4869]: I0202 14:56:59.237114 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.280536 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6x247" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" probeResult="failure" output=< Feb 02 14:57:00 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 14:57:00 crc kubenswrapper[4869]: > Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.661423 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d228ac68-eb5f-494a-bf43-6cbca346ae24","Type":"ContainerStarted","Data":"b9c5ab38ce0f1b23eedeb1840f6aa6cf45b7beba13d99fdded4d92eee9ace4f8"} Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.750323 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.754443 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.757909 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.782713 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946631 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946865 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946901 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:00 crc kubenswrapper[4869]: I0202 14:57:00.946954 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048679 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: 
\"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048712 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.048910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050214 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050249 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050266 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050637 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.050679 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.079744 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") pod \"dnsmasq-dns-578b8d767c-svw28\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " pod="openstack/dnsmasq-dns-578b8d767c-svw28"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.380182 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-svw28"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.536961 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678032 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678601 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678649 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678718 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678735 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678764 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678780 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678800 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.678914 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") pod \"95035071-a194-40ba-9b64-700ae3121dc4\" (UID: \"95035071-a194-40ba-9b64-700ae3121dc4\") "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.698344 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.701813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.704080 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.709789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info" (OuterVolumeSpecName: "pod-info") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.719586 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.722961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.724206 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5" (OuterVolumeSpecName: "kube-api-access-zkxg5") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "kube-api-access-zkxg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.732275 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.732929 4869 generic.go:334] "Generic (PLEG): container finished" podID="95035071-a194-40ba-9b64-700ae3121dc4" containerID="7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa" exitCode=0
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.733460 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.734127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerDied","Data":"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"}
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.734181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"95035071-a194-40ba-9b64-700ae3121dc4","Type":"ContainerDied","Data":"4e70c734374d890324e34f318ca08d55436f47c8aef60a353e00fd13a1942965"}
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.734205 4869 scope.go:117] "RemoveContainer" containerID="7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.757786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data" (OuterVolumeSpecName: "config-data") pod "95035071-a194-40ba-9b64-700ae3121dc4" (UID: "95035071-a194-40ba-9b64-700ae3121dc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.772696 4869 scope.go:117] "RemoveContainer" containerID="5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798650 4869 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798699 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798715 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798728 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkxg5\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-kube-api-access-zkxg5\") on node \"crc\" DevicePath \"\""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798776 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798790 4869 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95035071-a194-40ba-9b64-700ae3121dc4-pod-info\") on node \"crc\" DevicePath \"\""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798803 4869 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95035071-a194-40ba-9b64-700ae3121dc4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798818 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.798830 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.823948 4869 scope.go:117] "RemoveContainer" containerID="7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"
Feb 02 14:57:01 crc kubenswrapper[4869]: E0202 14:57:01.824946 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa\": container with ID starting with 7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa not found: ID does not exist" containerID="7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.824983 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa"} err="failed to get container status \"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa\": rpc error: code = NotFound desc = could not find container \"7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa\": container with ID starting with 7424bc1c9c7cdae2d3823efa8ce3a97d00d391e563f4a9867d517d8d6f1cb5fa not found: ID does not exist"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.825011 4869 scope.go:117] "RemoveContainer" containerID="5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"
Feb 02 14:57:01 crc kubenswrapper[4869]: E0202 14:57:01.826768 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93\": container with ID starting with 5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93 not found: ID does not exist" containerID="5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"
Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.826798 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93"} err="failed to get container status \"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93\": rpc error: code = NotFound desc = could not find container \"5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93\": container with ID starting with 5ab6d0b5447b4739f514617517db0c41d774b8b7b34e9882a2312ee17d0adf93 not found: ID does not exist"
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.902390 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95035071-a194-40ba-9b64-700ae3121dc4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.902436 4869 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95035071-a194-40ba-9b64-700ae3121dc4-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:01 crc kubenswrapper[4869]: I0202 14:57:01.902449 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.088439 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:02 crc kubenswrapper[4869]: W0202 14:57:02.091367 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6110b1ea_6ea9_454e_b77b_7c9d1373e376.slice/crio-6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b WatchSource:0}: Error finding container 6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b: Status 404 returned error can't find the container with id 6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.098256 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.106522 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.156130 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:02 crc kubenswrapper[4869]: E0202 14:57:02.157139 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.157161 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" Feb 02 14:57:02 crc kubenswrapper[4869]: E0202 14:57:02.157176 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="setup-container" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.157183 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="setup-container" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.157388 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="95035071-a194-40ba-9b64-700ae3121dc4" containerName="rabbitmq" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.158606 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.162756 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163022 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gtj7h" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163229 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163441 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163690 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.163837 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.164071 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.172697 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322440 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebc9110-3186-4c3f-877b-44061d345584-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5qbk\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-kube-api-access-r5qbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322745 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322818 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebc9110-3186-4c3f-877b-44061d345584-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322865 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322887 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.322935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.424860 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5qbk\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-kube-api-access-r5qbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.424910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.426910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebc9110-3186-4c3f-877b-44061d345584-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427294 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427571 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebc9110-3186-4c3f-877b-44061d345584-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427616 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.427740 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.428256 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.428608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.429255 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.429909 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.431539 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cebc9110-3186-4c3f-877b-44061d345584-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.433173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.433818 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.434355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cebc9110-3186-4c3f-877b-44061d345584-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.434516 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cebc9110-3186-4c3f-877b-44061d345584-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.447883 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5qbk\" (UniqueName: \"kubernetes.io/projected/cebc9110-3186-4c3f-877b-44061d345584-kube-api-access-r5qbk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 
14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.458345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cebc9110-3186-4c3f-877b-44061d345584\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.596279 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.758915 4869 generic.go:334] "Generic (PLEG): container finished" podID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerID="790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2" exitCode=0 Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.760162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerDied","Data":"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2"} Feb 02 14:57:02 crc kubenswrapper[4869]: I0202 14:57:02.760271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerStarted","Data":"6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b"} Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.075098 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.476483 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95035071-a194-40ba-9b64-700ae3121dc4" path="/var/lib/kubelet/pods/95035071-a194-40ba-9b64-700ae3121dc4/volumes" Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.773727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerStarted","Data":"c7359c171b09799208d5ca9c708ada6778b2861dc2f3c28fb5456f4c1ab1b124"} Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.776483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerStarted","Data":"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e"} Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.776721 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:03 crc kubenswrapper[4869]: I0202 14:57:03.806581 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-svw28" podStartSLOduration=3.806557462 podStartE2EDuration="3.806557462s" podCreationTimestamp="2026-02-02 14:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:57:03.79631893 +0000 UTC m=+1425.440955710" watchObservedRunningTime="2026-02-02 14:57:03.806557462 +0000 UTC m=+1425.451194242" Feb 02 14:57:04 crc kubenswrapper[4869]: I0202 14:57:04.791294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerStarted","Data":"8ed64fb43d213aab79a419a4cea6e1ee2b793f4685da8dd0e3a8dc8cf9f27616"} Feb 02 14:57:09 crc kubenswrapper[4869]: I0202 14:57:09.290818 
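The podStartSLOduration above is straightforward arithmetic: no image pull happened (both pull timestamps are the zero time), so the reported figure is watchObservedRunningTime minus podCreationTimestamp. A quick check in Go, using the two timestamps from the record:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-02-02T14:57:00Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2026-02-02T14:57:03.806557462Z")
        fmt.Println(observed.Sub(created)) // 3.806557462s, matching podStartSLOduration
    }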
Feb 02 14:57:04 crc kubenswrapper[4869]: I0202 14:57:04.791294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerStarted","Data":"8ed64fb43d213aab79a419a4cea6e1ee2b793f4685da8dd0e3a8dc8cf9f27616"}
Feb 02 14:57:09 crc kubenswrapper[4869]: I0202 14:57:09.290818 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:57:09 crc kubenswrapper[4869]: I0202 14:57:09.349681 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6x247"
Feb 02 14:57:09 crc kubenswrapper[4869]: I0202 14:57:09.533060 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"]
Feb 02 14:57:10 crc kubenswrapper[4869]: I0202 14:57:10.850375 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6x247" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" containerID="cri-o://b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" gracePeriod=2
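gracePeriod=2 above is the grace period applied when the pod's deletion reaches the kubelet: the runtime signals the container to stop, waits up to the grace period, then force-kills whatever is left. A self-contained sketch of that sequence against an ordinary process (the child command and the 2s value are illustrative stand-ins, not kubelet or CRI-O code):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "60") // stand-in for the container process
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        grace := 2 * time.Second
        _ = cmd.Process.Signal(syscall.SIGTERM) // polite stop, sent first

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            _ = cmd.Process.Kill() // SIGKILL once the grace period lapses
            <-done
            fmt.Println("force-killed after grace period")
        }
    }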
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.456311 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.456730 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="dnsmasq-dns" containerID="cri-o://498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7" gracePeriod=10 Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.543504 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.543532 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlngc\" (UniqueName: \"kubernetes.io/projected/4e5afe82-077a-4545-84a3-54f108a39d37-kube-api-access-vlngc\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.612715 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e5afe82-077a-4545-84a3-54f108a39d37" (UID: "4e5afe82-077a-4545-84a3-54f108a39d37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.645410 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5afe82-077a-4545-84a3-54f108a39d37-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.695732 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 14:57:11 crc kubenswrapper[4869]: E0202 14:57:11.696353 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="extract-content" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.696372 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="extract-content" Feb 02 14:57:11 crc kubenswrapper[4869]: E0202 14:57:11.696397 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.696406 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" Feb 02 14:57:11 crc kubenswrapper[4869]: E0202 14:57:11.696426 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="extract-utilities" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.696435 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="extract-utilities" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.713563 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" containerName="registry-server" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.718956 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.778300 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.857791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.858291 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.858500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.859602 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.860064 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.861753 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.876982 4869 generic.go:334] "Generic (PLEG): container finished" podID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerID="498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7" exitCode=0 Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.877309 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerDied","Data":"498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7"} Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889111 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e5afe82-077a-4545-84a3-54f108a39d37" containerID="b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" exitCode=0 Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889344 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerDied","Data":"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56"} Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889557 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x247" event={"ID":"4e5afe82-077a-4545-84a3-54f108a39d37","Type":"ContainerDied","Data":"d15cca6f8345e4d73be82151bb0e28ba11b1504dccb9fda5d84b628c49012abf"} Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889682 4869 scope.go:117] "RemoveContainer" containerID="b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.889479 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x247" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.915167 4869 scope.go:117] "RemoveContainer" containerID="7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.961467 4869 scope.go:117] "RemoveContainer" containerID="a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.965991 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.966361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: 
\"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.967744 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.973220 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.974023 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.974607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.975362 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:11 crc kubenswrapper[4869]: I0202 14:57:11.992274 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"] Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.004249 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") pod \"dnsmasq-dns-fbc59fbb7-zltx5\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.013254 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6x247"] Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.075325 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.217952 4869 scope.go:117] "RemoveContainer" containerID="b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" Feb 02 14:57:12 crc kubenswrapper[4869]: E0202 14:57:12.222539 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56\": container with ID starting with b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56 not found: ID does not exist" containerID="b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.222620 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56"} err="failed to get container status \"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56\": rpc error: code = NotFound desc = could not find container \"b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56\": container with ID starting with b70a39da8a6f68b70a9ede4a1e887b8c0b4efdbf037dae4b17a8d652b091aa56 not found: ID does not exist" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.222665 4869 scope.go:117] "RemoveContainer" containerID="7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73" Feb 02 14:57:12 crc kubenswrapper[4869]: E0202 14:57:12.223368 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73\": container with ID starting with 7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73 not found: ID does not exist" containerID="7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.223422 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73"} err="failed to get container status \"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73\": rpc error: code = NotFound desc = could not find container \"7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73\": container with ID starting with 7a5d785f0fb00708688da8a37e6f4ee9357ca29896ac216c780278ccfce0fd73 not found: ID does not exist" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.223462 4869 scope.go:117] "RemoveContainer" containerID="a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58" Feb 02 14:57:12 crc kubenswrapper[4869]: E0202 14:57:12.223856 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58\": container with ID starting with a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58 not found: ID does not exist" containerID="a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.223942 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58"} err="failed to get container status \"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58\": rpc error: code = NotFound 
desc = could not find container \"a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58\": container with ID starting with a79ee463b83c8c672c902d23f96ef487efc7315c23614e6b1095e261677a1d58 not found: ID does not exist" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.251536 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.377959 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.378238 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.378361 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.378394 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.378465 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") pod \"02258ec9-a572-417b-bb4c-35d0e5595e60\" (UID: \"02258ec9-a572-417b-bb4c-35d0e5595e60\") " Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.384766 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp" (OuterVolumeSpecName: "kube-api-access-gf2cp") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "kube-api-access-gf2cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.429881 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.435707 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.440680 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config" (OuterVolumeSpecName: "config") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.441349 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02258ec9-a572-417b-bb4c-35d0e5595e60" (UID: "02258ec9-a572-417b-bb4c-35d0e5595e60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495546 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf2cp\" (UniqueName: \"kubernetes.io/projected/02258ec9-a572-417b-bb4c-35d0e5595e60-kube-api-access-gf2cp\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495589 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495601 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495614 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.495623 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02258ec9-a572-417b-bb4c-35d0e5595e60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.572739 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 14:57:12 crc kubenswrapper[4869]: W0202 14:57:12.575803 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod886da892_6808_4ff8_8fa4_48ad9cd65843.slice/crio-f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12 WatchSource:0}: Error finding container f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12: Status 404 returned error can't find the container with id f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12 Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.904321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" event={"ID":"02258ec9-a572-417b-bb4c-35d0e5595e60","Type":"ContainerDied","Data":"b0a192cf90b2c34b440565bf71d8167abd947c406c2ba5f06b41ea7ba562f653"} Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.904367 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-44fwt" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.905018 4869 scope.go:117] "RemoveContainer" containerID="498cae76fd0efd9a99b02d25099e7ea5f7e21515cef0ac87aa947252ef9f06c7" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.911288 4869 generic.go:334] "Generic (PLEG): container finished" podID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerID="267d2b5ca4d238e5b769ca48e7a762954290c341c2ea35ac8b67c09d6240f345" exitCode=0 Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.911421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerDied","Data":"267d2b5ca4d238e5b769ca48e7a762954290c341c2ea35ac8b67c09d6240f345"} Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.911521 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerStarted","Data":"f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12"} Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.963370 4869 scope.go:117] "RemoveContainer" containerID="8cf856a4df374f3980cbc2ddc8eb1618f3c5e7b2fc6a969f06245cd19d267eb6" Feb 02 14:57:12 crc kubenswrapper[4869]: I0202 14:57:12.990420 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.000171 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-44fwt"] Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.474958 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" path="/var/lib/kubelet/pods/02258ec9-a572-417b-bb4c-35d0e5595e60/volumes" Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.475627 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e5afe82-077a-4545-84a3-54f108a39d37" path="/var/lib/kubelet/pods/4e5afe82-077a-4545-84a3-54f108a39d37/volumes" Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.925373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerStarted","Data":"f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8"} Feb 02 14:57:13 crc kubenswrapper[4869]: I0202 14:57:13.959738 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" podStartSLOduration=2.959708404 podStartE2EDuration="2.959708404s" podCreationTimestamp="2026-02-02 14:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:57:13.950009566 +0000 UTC m=+1435.594646356" watchObservedRunningTime="2026-02-02 14:57:13.959708404 +0000 UTC m=+1435.604345174" Feb 02 14:57:14 crc kubenswrapper[4869]: I0202 14:57:14.937946 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.078007 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.156605 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 
14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.157095 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-svw28" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="dnsmasq-dns" containerID="cri-o://7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" gracePeriod=10 Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.695473 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.853737 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854102 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854194 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.854361 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") pod \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\" (UID: \"6110b1ea-6ea9-454e-b77b-7c9d1373e376\") " Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.862224 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk" (OuterVolumeSpecName: "kube-api-access-lrdpk") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "kube-api-access-lrdpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.917262 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.922619 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.931524 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config" (OuterVolumeSpecName: "config") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.932624 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.943214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "6110b1ea-6ea9-454e-b77b-7c9d1373e376" (UID: "6110b1ea-6ea9-454e-b77b-7c9d1373e376"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957707 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957768 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957783 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957799 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-config\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957849 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrdpk\" (UniqueName: \"kubernetes.io/projected/6110b1ea-6ea9-454e-b77b-7c9d1373e376-kube-api-access-lrdpk\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:22 crc kubenswrapper[4869]: I0202 14:57:22.957863 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6110b1ea-6ea9-454e-b77b-7c9d1373e376-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037033 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerID="7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" exitCode=0 Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerDied","Data":"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e"} Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-svw28" event={"ID":"6110b1ea-6ea9-454e-b77b-7c9d1373e376","Type":"ContainerDied","Data":"6da74cfcf9a508836f6caffda75361ac500a1bb8260cd11317779de516dea74b"} Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037137 4869 scope.go:117] "RemoveContainer" containerID="7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.037320 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-svw28" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.081873 4869 scope.go:117] "RemoveContainer" containerID="790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.088068 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.098559 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-svw28"] Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.115617 4869 scope.go:117] "RemoveContainer" containerID="7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" Feb 02 14:57:23 crc kubenswrapper[4869]: E0202 14:57:23.116550 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e\": container with ID starting with 7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e not found: ID does not exist" containerID="7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.116628 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e"} err="failed to get container status \"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e\": rpc error: code = NotFound desc = could not find container \"7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e\": container with ID starting with 7dfc7d73fe165b141f138133e62e1fc49cba7490ab7676e398ef61f73bafed0e not found: ID does not exist" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.116667 4869 scope.go:117] "RemoveContainer" containerID="790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2" Feb 02 14:57:23 crc kubenswrapper[4869]: E0202 14:57:23.117273 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2\": container with ID starting with 790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2 not found: ID does not exist" containerID="790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2" Feb 02 14:57:23 crc 
kubenswrapper[4869]: I0202 14:57:23.117318 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2"} err="failed to get container status \"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2\": rpc error: code = NotFound desc = could not find container \"790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2\": container with ID starting with 790b0cc76a72b6b983c9a976d3d4dc42773457ce7de286a90b258c75bf6bc1b2 not found: ID does not exist" Feb 02 14:57:23 crc kubenswrapper[4869]: I0202 14:57:23.482074 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" path="/var/lib/kubelet/pods/6110b1ea-6ea9-454e-b77b-7c9d1373e376/volumes" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.841841 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 14:57:27 crc kubenswrapper[4869]: E0202 14:57:27.844486 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="init" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.844580 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="init" Feb 02 14:57:27 crc kubenswrapper[4869]: E0202 14:57:27.844653 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="init" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.844747 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="init" Feb 02 14:57:27 crc kubenswrapper[4869]: E0202 14:57:27.844835 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.844936 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: E0202 14:57:27.845049 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.845109 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.845388 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6110b1ea-6ea9-454e-b77b-7c9d1373e376" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.845492 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="02258ec9-a572-417b-bb4c-35d0e5595e60" containerName="dnsmasq-dns" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.846425 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.851448 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.853400 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.853765 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.854061 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.862891 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.964771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrqh\" (UniqueName: \"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.964848 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.964896 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:27 crc kubenswrapper[4869]: I0202 14:57:27.965049 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.067091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.067186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrqh\" (UniqueName: 
\"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.067253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.067320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.075285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.075776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.076063 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.086943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrqh\" (UniqueName: \"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.187950 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:28 crc kubenswrapper[4869]: I0202 14:57:28.767814 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 14:57:29 crc kubenswrapper[4869]: I0202 14:57:29.103663 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" event={"ID":"3767bf04-261f-4a7b-9639-ae8002718621","Type":"ContainerStarted","Data":"5b8d0ac79d9a381090de5328513e4ac984ba5c97f5a488afb997d250b9c4b276"} Feb 02 14:57:32 crc kubenswrapper[4869]: I0202 14:57:32.137619 4869 generic.go:334] "Generic (PLEG): container finished" podID="d228ac68-eb5f-494a-bf43-6cbca346ae24" containerID="b9c5ab38ce0f1b23eedeb1840f6aa6cf45b7beba13d99fdded4d92eee9ace4f8" exitCode=0 Feb 02 14:57:32 crc kubenswrapper[4869]: I0202 14:57:32.137719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d228ac68-eb5f-494a-bf43-6cbca346ae24","Type":"ContainerDied","Data":"b9c5ab38ce0f1b23eedeb1840f6aa6cf45b7beba13d99fdded4d92eee9ace4f8"} Feb 02 14:57:37 crc kubenswrapper[4869]: I0202 14:57:37.217694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerDied","Data":"8ed64fb43d213aab79a419a4cea6e1ee2b793f4685da8dd0e3a8dc8cf9f27616"} Feb 02 14:57:37 crc kubenswrapper[4869]: I0202 14:57:37.217710 4869 generic.go:334] "Generic (PLEG): container finished" podID="cebc9110-3186-4c3f-877b-44061d345584" containerID="8ed64fb43d213aab79a419a4cea6e1ee2b793f4685da8dd0e3a8dc8cf9f27616" exitCode=0 Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.246081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" event={"ID":"3767bf04-261f-4a7b-9639-ae8002718621","Type":"ContainerStarted","Data":"490db36993a771e14aff3fe8fc3bd15e52a119fe4a3a15db988f24da87af2b2a"} Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.249467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d228ac68-eb5f-494a-bf43-6cbca346ae24","Type":"ContainerStarted","Data":"5d09b3992a64c693b0a12274c0ee78e5a8fd50558706d5c9f19bfb09b5c8ce2c"} Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.249854 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.253059 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cebc9110-3186-4c3f-877b-44061d345584","Type":"ContainerStarted","Data":"99d39ff21110e6011c04638632d69e563c1d763e9e580c53e69c86e83fce8681"} Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.253626 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.271922 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" podStartSLOduration=2.7375587660000003 podStartE2EDuration="12.271875529s" podCreationTimestamp="2026-02-02 14:57:27 +0000 UTC" firstStartedPulling="2026-02-02 14:57:28.776400295 +0000 UTC m=+1450.421037075" lastFinishedPulling="2026-02-02 14:57:38.310717068 +0000 UTC m=+1459.955353838" 
observedRunningTime="2026-02-02 14:57:39.266929557 +0000 UTC m=+1460.911566347" watchObservedRunningTime="2026-02-02 14:57:39.271875529 +0000 UTC m=+1460.916512299" Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.303137 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.303106819 podStartE2EDuration="37.303106819s" podCreationTimestamp="2026-02-02 14:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:57:39.292245871 +0000 UTC m=+1460.936882651" watchObservedRunningTime="2026-02-02 14:57:39.303106819 +0000 UTC m=+1460.947743589" Feb 02 14:57:39 crc kubenswrapper[4869]: I0202 14:57:39.323363 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.323337408 podStartE2EDuration="43.323337408s" podCreationTimestamp="2026-02-02 14:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 14:57:39.315522625 +0000 UTC m=+1460.960159395" watchObservedRunningTime="2026-02-02 14:57:39.323337408 +0000 UTC m=+1460.967974178" Feb 02 14:57:50 crc kubenswrapper[4869]: I0202 14:57:50.363640 4869 generic.go:334] "Generic (PLEG): container finished" podID="3767bf04-261f-4a7b-9639-ae8002718621" containerID="490db36993a771e14aff3fe8fc3bd15e52a119fe4a3a15db988f24da87af2b2a" exitCode=0 Feb 02 14:57:50 crc kubenswrapper[4869]: I0202 14:57:50.363754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" event={"ID":"3767bf04-261f-4a7b-9639-ae8002718621","Type":"ContainerDied","Data":"490db36993a771e14aff3fe8fc3bd15e52a119fe4a3a15db988f24da87af2b2a"} Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.802999 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.913266 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") pod \"3767bf04-261f-4a7b-9639-ae8002718621\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.913645 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") pod \"3767bf04-261f-4a7b-9639-ae8002718621\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.913690 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") pod \"3767bf04-261f-4a7b-9639-ae8002718621\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.913747 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcrqh\" (UniqueName: \"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") pod \"3767bf04-261f-4a7b-9639-ae8002718621\" (UID: \"3767bf04-261f-4a7b-9639-ae8002718621\") " Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.921529 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3767bf04-261f-4a7b-9639-ae8002718621" (UID: "3767bf04-261f-4a7b-9639-ae8002718621"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.921514 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh" (OuterVolumeSpecName: "kube-api-access-vcrqh") pod "3767bf04-261f-4a7b-9639-ae8002718621" (UID: "3767bf04-261f-4a7b-9639-ae8002718621"). InnerVolumeSpecName "kube-api-access-vcrqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.946719 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3767bf04-261f-4a7b-9639-ae8002718621" (UID: "3767bf04-261f-4a7b-9639-ae8002718621"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:57:51 crc kubenswrapper[4869]: I0202 14:57:51.948361 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory" (OuterVolumeSpecName: "inventory") pod "3767bf04-261f-4a7b-9639-ae8002718621" (UID: "3767bf04-261f-4a7b-9639-ae8002718621"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.017884 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.028165 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.028207 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcrqh\" (UniqueName: \"kubernetes.io/projected/3767bf04-261f-4a7b-9639-ae8002718621-kube-api-access-vcrqh\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.028220 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3767bf04-261f-4a7b-9639-ae8002718621-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.385102 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" event={"ID":"3767bf04-261f-4a7b-9639-ae8002718621","Type":"ContainerDied","Data":"5b8d0ac79d9a381090de5328513e4ac984ba5c97f5a488afb997d250b9c4b276"} Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.385171 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8d0ac79d9a381090de5328513e4ac984ba5c97f5a488afb997d250b9c4b276" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.385202 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.479572 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 14:57:52 crc kubenswrapper[4869]: E0202 14:57:52.480206 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3767bf04-261f-4a7b-9639-ae8002718621" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.480231 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3767bf04-261f-4a7b-9639-ae8002718621" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.480559 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3767bf04-261f-4a7b-9639-ae8002718621" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.481590 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.485145 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.485294 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.485305 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.485640 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.494142 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.601676 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.644160 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.644263 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.644302 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.644348 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.746766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.747622 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.748277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.748405 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.751898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.751930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.758905 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.770844 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:52 crc kubenswrapper[4869]: I0202 14:57:52.803704 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 14:57:53 crc kubenswrapper[4869]: I0202 14:57:53.477515 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 14:57:54 crc kubenswrapper[4869]: I0202 14:57:54.417374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" event={"ID":"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083","Type":"ContainerStarted","Data":"7d5e25ac19c483d6558c58fba2ace1e684808d4e3b1a821e0d5e58c6d0be0112"} Feb 02 14:57:54 crc kubenswrapper[4869]: I0202 14:57:54.417833 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" event={"ID":"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083","Type":"ContainerStarted","Data":"45eb9092023474510986497b58938f8c056cf9410d12598b17849390008c5c0f"} Feb 02 14:57:54 crc kubenswrapper[4869]: I0202 14:57:54.447121 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" podStartSLOduration=1.978949684 podStartE2EDuration="2.447097959s" podCreationTimestamp="2026-02-02 14:57:52 +0000 UTC" firstStartedPulling="2026-02-02 14:57:53.488434189 +0000 UTC m=+1475.133070959" lastFinishedPulling="2026-02-02 14:57:53.956582464 +0000 UTC m=+1475.601219234" observedRunningTime="2026-02-02 14:57:54.437717627 +0000 UTC m=+1476.082354417" watchObservedRunningTime="2026-02-02 14:57:54.447097959 +0000 UTC m=+1476.091734729" Feb 02 14:57:57 crc kubenswrapper[4869]: I0202 14:57:57.245210 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 14:58:15 crc kubenswrapper[4869]: I0202 14:58:15.304790 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:58:15 crc kubenswrapper[4869]: I0202 14:58:15.305579 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:58:27 crc kubenswrapper[4869]: I0202 14:58:27.133390 4869 scope.go:117] "RemoveContainer" containerID="c0eba43d199f953d9626b7c88c284ea5aa7158b0c7b330e5e8b9495c554b8a8e" Feb 02 14:58:27 crc kubenswrapper[4869]: I0202 14:58:27.189472 4869 scope.go:117] "RemoveContainer" containerID="7ceee7ca0afb25fecb47c7d1ea7c643849b3e2a4371bef94fa2e91ed301777b9" Feb 02 14:58:45 crc kubenswrapper[4869]: I0202 14:58:45.304167 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:58:45 crc kubenswrapper[4869]: I0202 14:58:45.305067 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.303968 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.304835 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.304982 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.306034 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.306113 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" gracePeriod=600 Feb 02 14:59:15 crc kubenswrapper[4869]: E0202 14:59:15.440875 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.674430 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" exitCode=0 Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.674513 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9"} Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.674614 4869 scope.go:117] "RemoveContainer" containerID="c3ec0a059dffd930eba42e693ac182e4fdbf1c43776c99dc10f1b179ad07b666" Feb 02 14:59:15 crc kubenswrapper[4869]: I0202 14:59:15.675810 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 14:59:15 crc kubenswrapper[4869]: E0202 14:59:15.676530 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.294941 4869 scope.go:117] "RemoveContainer" containerID="078449dfe9468d87dcfb0be258a6b0c80818d1519435a1c1a98664100d03e287"
Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.353431 4869 scope.go:117] "RemoveContainer" containerID="3ff58dbf5363b2269191fc2c45069aa37d4e37d9deb8e85168a1a047ba2bdb49"
Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.383509 4869 scope.go:117] "RemoveContainer" containerID="40ebd5657fc6913db64b75356da71511856954c30a009f72e56e64db082a3a75"
Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.404519 4869 scope.go:117] "RemoveContainer" containerID="32b2276ee7015cec85a482c7348af541598ae26c827581362792946efdaef3cb"
Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.422874 4869 scope.go:117] "RemoveContainer" containerID="5b057f5c2556a8f58e337485429c58bd6088b4c173270d5455938195918cef0b"
Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.445374 4869 scope.go:117] "RemoveContainer" containerID="905cc60b75ca27e35f349c10d6c12aef2bdd4a6d5c9bab7d3cb7933a0dd27121"
Feb 02 14:59:27 crc kubenswrapper[4869]: I0202 14:59:27.489093 4869 scope.go:117] "RemoveContainer" containerID="a55006e3fb4918a87e8df899b7bfb2e8873a9539cc2d1f4703c9dc0c6eae1974"
Feb 02 14:59:30 crc kubenswrapper[4869]: I0202 14:59:30.462705 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9"
Feb 02 14:59:30 crc kubenswrapper[4869]: E0202 14:59:30.463434 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.076372 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"]
Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.079689 4869 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.102168 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.230733 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.230813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.231169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.333445 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.333524 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.333619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.334226 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.334264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.359898 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") pod \"certified-operators-jhqvw\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:34 crc kubenswrapper[4869]: I0202 14:59:34.440604 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.013094 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.874778 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerID="e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935" exitCode=0 Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.874853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerDied","Data":"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935"} Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.875220 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerStarted","Data":"23d37e4273dff81b5dc1819ee91f3581a057e50a765066767ea6b2472724f6e3"} Feb 02 14:59:35 crc kubenswrapper[4869]: I0202 14:59:35.877247 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 14:59:36 crc kubenswrapper[4869]: I0202 14:59:36.885136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerStarted","Data":"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4"} Feb 02 14:59:37 crc kubenswrapper[4869]: I0202 14:59:37.899852 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerID="7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4" exitCode=0 Feb 02 14:59:37 crc kubenswrapper[4869]: I0202 14:59:37.899948 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerDied","Data":"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4"} Feb 02 14:59:38 crc kubenswrapper[4869]: I0202 14:59:38.915094 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerStarted","Data":"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0"} Feb 02 14:59:38 crc kubenswrapper[4869]: I0202 14:59:38.940497 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jhqvw" podStartSLOduration=2.445735593 podStartE2EDuration="4.940461124s" podCreationTimestamp="2026-02-02 14:59:34 +0000 UTC" firstStartedPulling="2026-02-02 14:59:35.877013476 +0000 UTC m=+1577.521650246" lastFinishedPulling="2026-02-02 14:59:38.371739007 +0000 UTC m=+1580.016375777" observedRunningTime="2026-02-02 14:59:38.933757328 +0000 UTC m=+1580.578394098" watchObservedRunningTime="2026-02-02 
14:59:38.940461124 +0000 UTC m=+1580.585097914" Feb 02 14:59:42 crc kubenswrapper[4869]: I0202 14:59:42.462737 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 14:59:42 crc kubenswrapper[4869]: E0202 14:59:42.463374 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 14:59:44 crc kubenswrapper[4869]: I0202 14:59:44.441096 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:44 crc kubenswrapper[4869]: I0202 14:59:44.441642 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:44 crc kubenswrapper[4869]: I0202 14:59:44.496942 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:45 crc kubenswrapper[4869]: I0202 14:59:45.033489 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:45 crc kubenswrapper[4869]: I0202 14:59:45.087539 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.005352 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jhqvw" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="registry-server" containerID="cri-o://9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" gracePeriod=2 Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.499833 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.694933 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") pod \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.695036 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") pod \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.695112 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") pod \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\" (UID: \"8d198208-3d2f-4b1f-986f-0cafce4c5ed5\") " Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.696334 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities" (OuterVolumeSpecName: "utilities") pod "8d198208-3d2f-4b1f-986f-0cafce4c5ed5" (UID: "8d198208-3d2f-4b1f-986f-0cafce4c5ed5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.703489 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d" (OuterVolumeSpecName: "kube-api-access-qvw8d") pod "8d198208-3d2f-4b1f-986f-0cafce4c5ed5" (UID: "8d198208-3d2f-4b1f-986f-0cafce4c5ed5"). InnerVolumeSpecName "kube-api-access-qvw8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.745475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d198208-3d2f-4b1f-986f-0cafce4c5ed5" (UID: "8d198208-3d2f-4b1f-986f-0cafce4c5ed5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.797213 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.797250 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvw8d\" (UniqueName: \"kubernetes.io/projected/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-kube-api-access-qvw8d\") on node \"crc\" DevicePath \"\"" Feb 02 14:59:47 crc kubenswrapper[4869]: I0202 14:59:47.797263 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d198208-3d2f-4b1f-986f-0cafce4c5ed5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017013 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerID="9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" exitCode=0 Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017059 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhqvw" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerDied","Data":"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0"} Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhqvw" event={"ID":"8d198208-3d2f-4b1f-986f-0cafce4c5ed5","Type":"ContainerDied","Data":"23d37e4273dff81b5dc1819ee91f3581a057e50a765066767ea6b2472724f6e3"} Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.017135 4869 scope.go:117] "RemoveContainer" containerID="9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.046212 4869 scope.go:117] "RemoveContainer" containerID="7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.058724 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.077035 4869 scope.go:117] "RemoveContainer" containerID="e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.083432 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jhqvw"] Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.121827 4869 scope.go:117] "RemoveContainer" containerID="9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" Feb 02 14:59:48 crc kubenswrapper[4869]: E0202 14:59:48.122205 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0\": container with ID starting with 9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0 not found: ID does not exist" containerID="9fa40fafd8d58f974f6b7668eb3db630b5564e3cb859e7790cc2aaa93c2d7af0" Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.122239 
Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.122263 4869 scope.go:117] "RemoveContainer" containerID="7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4"
Feb 02 14:59:48 crc kubenswrapper[4869]: E0202 14:59:48.122854 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4\": container with ID starting with 7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4 not found: ID does not exist" containerID="7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4"
Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.122877 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4"} err="failed to get container status \"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4\": rpc error: code = NotFound desc = could not find container \"7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4\": container with ID starting with 7f4e9e67092d4c79b8fd67f9f52b66b4f45fde4fb572111a31f0e8a148619ee4 not found: ID does not exist"
Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.122890 4869 scope.go:117] "RemoveContainer" containerID="e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935"
Feb 02 14:59:48 crc kubenswrapper[4869]: E0202 14:59:48.123435 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935\": container with ID starting with e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935 not found: ID does not exist" containerID="e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935"
Feb 02 14:59:48 crc kubenswrapper[4869]: I0202 14:59:48.123479 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935"} err="failed to get container status \"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935\": rpc error: code = NotFound desc = could not find container \"e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935\": container with ID starting with e09baa670f16d336285f067a40977839a58a20fd0e0c92bbad914ae6d4fb7935 not found: ID does not exist"
Feb 02 14:59:49 crc kubenswrapper[4869]: I0202 14:59:49.480012 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" path="/var/lib/kubelet/pods/8d198208-3d2f-4b1f-986f-0cafce4c5ed5/volumes"
Feb 02 14:59:57 crc kubenswrapper[4869]: I0202 14:59:57.463887 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9"
Feb 02 14:59:57 crc kubenswrapper[4869]: E0202 14:59:57.465321 4869 pod_workers.go:1301] "Error syncing pod, skipping"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.164822 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:00:00 crc kubenswrapper[4869]: E0202 15:00:00.167105 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="extract-utilities" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.167135 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="extract-utilities" Feb 02 15:00:00 crc kubenswrapper[4869]: E0202 15:00:00.167148 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="registry-server" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.167154 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="registry-server" Feb 02 15:00:00 crc kubenswrapper[4869]: E0202 15:00:00.167163 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="extract-content" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.167169 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="extract-content" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.167421 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d198208-3d2f-4b1f-986f-0cafce4c5ed5" containerName="registry-server" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.168226 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.170239 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.170681 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.183953 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.303180 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.303375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.303411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.405037 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.405632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.405672 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.406358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") pod 
\"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.413771 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.425244 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") pod \"collect-profiles-29500740-nx2b6\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.496815 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:00 crc kubenswrapper[4869]: I0202 15:00:00.975838 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:00:01 crc kubenswrapper[4869]: I0202 15:00:01.146236 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" event={"ID":"2f7b8e70-b003-44d3-92f8-f3537d98f42f","Type":"ContainerStarted","Data":"3f8d9f91a4b30050fb71c3442bc23915a29c349a2821f57dd5239985970d263f"} Feb 02 15:00:02 crc kubenswrapper[4869]: I0202 15:00:02.168797 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" containerID="59bc9e2bf2a33d0613a4b3662bade576d4b886a4ed9586484e6fdba35d1e7e34" exitCode=0 Feb 02 15:00:02 crc kubenswrapper[4869]: I0202 15:00:02.169024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" event={"ID":"2f7b8e70-b003-44d3-92f8-f3537d98f42f","Type":"ContainerDied","Data":"59bc9e2bf2a33d0613a4b3662bade576d4b886a4ed9586484e6fdba35d1e7e34"} Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.586078 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.782192 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") pod \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.783011 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") pod \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.784099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") pod \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\" (UID: \"2f7b8e70-b003-44d3-92f8-f3537d98f42f\") " Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.784571 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f7b8e70-b003-44d3-92f8-f3537d98f42f" (UID: "2f7b8e70-b003-44d3-92f8-f3537d98f42f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.785733 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f7b8e70-b003-44d3-92f8-f3537d98f42f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.790707 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f7b8e70-b003-44d3-92f8-f3537d98f42f" (UID: "2f7b8e70-b003-44d3-92f8-f3537d98f42f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.791218 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2" (OuterVolumeSpecName: "kube-api-access-pz9r2") pod "2f7b8e70-b003-44d3-92f8-f3537d98f42f" (UID: "2f7b8e70-b003-44d3-92f8-f3537d98f42f"). InnerVolumeSpecName "kube-api-access-pz9r2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.889415 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz9r2\" (UniqueName: \"kubernetes.io/projected/2f7b8e70-b003-44d3-92f8-f3537d98f42f-kube-api-access-pz9r2\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:03 crc kubenswrapper[4869]: I0202 15:00:03.889455 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f7b8e70-b003-44d3-92f8-f3537d98f42f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:04 crc kubenswrapper[4869]: I0202 15:00:04.189138 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" event={"ID":"2f7b8e70-b003-44d3-92f8-f3537d98f42f","Type":"ContainerDied","Data":"3f8d9f91a4b30050fb71c3442bc23915a29c349a2821f57dd5239985970d263f"} Feb 02 15:00:04 crc kubenswrapper[4869]: I0202 15:00:04.189205 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f8d9f91a4b30050fb71c3442bc23915a29c349a2821f57dd5239985970d263f" Feb 02 15:00:04 crc kubenswrapper[4869]: I0202 15:00:04.189203 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6" Feb 02 15:00:08 crc kubenswrapper[4869]: I0202 15:00:08.463392 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:08 crc kubenswrapper[4869]: E0202 15:00:08.464414 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:19 crc kubenswrapper[4869]: I0202 15:00:19.474834 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:19 crc kubenswrapper[4869]: E0202 15:00:19.477441 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:27 crc kubenswrapper[4869]: I0202 15:00:27.575064 4869 scope.go:117] "RemoveContainer" containerID="5e1911969d52a09a3f503d00bf15dabaee35fcbf98c6c4736cd296556393f67e" Feb 02 15:00:27 crc kubenswrapper[4869]: I0202 15:00:27.602617 4869 scope.go:117] "RemoveContainer" containerID="387aa540d9fce181b7f57c5804b421869eb4eb211e3e66410d45ebdcf5c5ae37" Feb 02 15:00:27 crc kubenswrapper[4869]: I0202 15:00:27.627057 4869 scope.go:117] "RemoveContainer" containerID="2ff5eb04773bd02ddd0e38f9f431cb9cdb7022ae4b7172a4d8e9ab2f3a0a6a8f" Feb 02 15:00:27 crc kubenswrapper[4869]: I0202 15:00:27.645832 4869 scope.go:117] "RemoveContainer" containerID="ccf60dcebf438ff1d0a8c3f18df6ab3e1154822b6043a57628715b0f9e3564b5" Feb 02 15:00:30 crc kubenswrapper[4869]: I0202 15:00:30.463570 4869 scope.go:117] "RemoveContainer" 
containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:30 crc kubenswrapper[4869]: E0202 15:00:30.464233 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.135241 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:33 crc kubenswrapper[4869]: E0202 15:00:33.136515 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" containerName="collect-profiles" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.136548 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" containerName="collect-profiles" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.136933 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" containerName="collect-profiles" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.140370 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.152016 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.242980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.244404 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.244667 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.347520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.348315 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.348892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.349172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.349318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.381481 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") pod \"community-operators-w7584\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.472671 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w7584"
Feb 02 15:00:33 crc kubenswrapper[4869]: I0202 15:00:33.997130 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7584"]
Feb 02 15:00:34 crc kubenswrapper[4869]: I0202 15:00:34.508315 4869 generic.go:334] "Generic (PLEG): container finished" podID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerID="30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37" exitCode=0
Feb 02 15:00:34 crc kubenswrapper[4869]: I0202 15:00:34.508368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerDied","Data":"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37"}
Feb 02 15:00:34 crc kubenswrapper[4869]: I0202 15:00:34.508399 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerStarted","Data":"2759f5de4968e9862c72338bd1f481b3b6b44a2e19fea05d9f93d9a70f06d28a"}
Feb 02 15:00:36 crc kubenswrapper[4869]: I0202 15:00:36.536835 4869 generic.go:334] "Generic (PLEG): container finished" podID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerID="a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb" exitCode=0
Feb 02 15:00:36 crc kubenswrapper[4869]: I0202 15:00:36.536958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerDied","Data":"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb"}
Feb 02 15:00:37 crc kubenswrapper[4869]: I0202 15:00:37.549364 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerStarted","Data":"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d"}
Feb 02 15:00:37 crc kubenswrapper[4869]: I0202 15:00:37.572386 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w7584" podStartSLOduration=2.070460626 podStartE2EDuration="4.572362493s" podCreationTimestamp="2026-02-02 15:00:33 +0000 UTC" firstStartedPulling="2026-02-02 15:00:34.511927811 +0000 UTC m=+1636.156564581" lastFinishedPulling="2026-02-02 15:00:37.013829678 +0000 UTC m=+1638.658466448" observedRunningTime="2026-02-02 15:00:37.569342199 +0000 UTC m=+1639.213978969" watchObservedRunningTime="2026-02-02 15:00:37.572362493 +0000 UTC m=+1639.216999263"
Feb 02 15:00:43 crc kubenswrapper[4869]: I0202 15:00:43.473651 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w7584"
Feb 02 15:00:43 crc kubenswrapper[4869]: I0202 15:00:43.474030 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w7584"
Feb 02 15:00:43 crc kubenswrapper[4869]: I0202 15:00:43.552959 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w7584"
Feb 02 15:00:44 crc kubenswrapper[4869]: I0202 15:00:44.158421 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w7584"
Feb 02 15:00:44 crc kubenswrapper[4869]: I0202 15:00:44.232487 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w7584"]
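
The DELETE above starts a graceful shutdown, visible just below: the kubelet kills registry-server with gracePeriod=2, meaning the container gets a termination signal and two seconds to exit before a forced kill (compare gracePeriod=600 for the machine-config-daemon restart earlier in this log). A minimal Go sketch of the TERM-then-KILL pattern for a local process (plain os.Process signalling; the kubelet itself does this through the CRI StopContainer call):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // killWithGrace mimics "Killing container with a grace period": send
    // SIGTERM, wait up to grace, then SIGKILL if the process is still alive.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace expired: force kill
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        // gracePeriod=2, as for the registry-server container below.
        fmt.Println("exit:", killWithGrace(cmd, 2*time.Second))
    }
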
source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:44 crc kubenswrapper[4869]: I0202 15:00:44.462452 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:44 crc kubenswrapper[4869]: E0202 15:00:44.462725 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.115016 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w7584" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="registry-server" containerID="cri-o://c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" gracePeriod=2 Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.227234 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.230609 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.244131 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.244507 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.244628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.267708 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.347814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.347960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") 
pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.347993 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.348970 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.348982 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.378622 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") pod \"redhat-marketplace-n4jws\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.612804 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.622729 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.651865 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") pod \"844dd20e-3c4a-4900-91d4-5783dc09ffda\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.652408 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") pod \"844dd20e-3c4a-4900-91d4-5783dc09ffda\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.652461 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") pod \"844dd20e-3c4a-4900-91d4-5783dc09ffda\" (UID: \"844dd20e-3c4a-4900-91d4-5783dc09ffda\") " Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.653664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities" (OuterVolumeSpecName: "utilities") pod "844dd20e-3c4a-4900-91d4-5783dc09ffda" (UID: "844dd20e-3c4a-4900-91d4-5783dc09ffda"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.659604 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt" (OuterVolumeSpecName: "kube-api-access-hxgxt") pod "844dd20e-3c4a-4900-91d4-5783dc09ffda" (UID: "844dd20e-3c4a-4900-91d4-5783dc09ffda"). InnerVolumeSpecName "kube-api-access-hxgxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.709945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "844dd20e-3c4a-4900-91d4-5783dc09ffda" (UID: "844dd20e-3c4a-4900-91d4-5783dc09ffda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.753770 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.753815 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844dd20e-3c4a-4900-91d4-5783dc09ffda-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:46 crc kubenswrapper[4869]: I0202 15:00:46.753830 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxgxt\" (UniqueName: \"kubernetes.io/projected/844dd20e-3c4a-4900-91d4-5783dc09ffda-kube-api-access-hxgxt\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.118269 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131343 4869 generic.go:334] "Generic (PLEG): container finished" podID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerID="c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" exitCode=0 Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerDied","Data":"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d"} Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131451 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w7584" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131481 4869 scope.go:117] "RemoveContainer" containerID="c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.131463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7584" event={"ID":"844dd20e-3c4a-4900-91d4-5783dc09ffda","Type":"ContainerDied","Data":"2759f5de4968e9862c72338bd1f481b3b6b44a2e19fea05d9f93d9a70f06d28a"} Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.153693 4869 scope.go:117] "RemoveContainer" containerID="a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.184958 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.192640 4869 scope.go:117] "RemoveContainer" containerID="30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.209669 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w7584"] Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.225310 4869 scope.go:117] "RemoveContainer" containerID="c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" Feb 02 15:00:47 crc kubenswrapper[4869]: E0202 15:00:47.226472 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d\": container with ID starting with c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d not found: ID does not exist" containerID="c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.226516 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d"} err="failed to get container status \"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d\": rpc error: code = NotFound desc = could not find container \"c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d\": container with ID starting with c814b6c1ef3a03b03023af554a15bff73ab1d72a7752ca58e428b2b4fcac4f6d not found: ID does not exist" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.226541 4869 scope.go:117] "RemoveContainer" containerID="a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb" Feb 02 15:00:47 crc kubenswrapper[4869]: E0202 15:00:47.226893 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb\": container with ID starting with a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb not found: ID does not exist" containerID="a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.226951 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb"} err="failed to get container status \"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb\": rpc error: code = NotFound desc = could not find 
container \"a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb\": container with ID starting with a95d7cabb335bc850fa2824680213d3d9e9be5ad200791273b708fb718cd35cb not found: ID does not exist" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.226974 4869 scope.go:117] "RemoveContainer" containerID="30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37" Feb 02 15:00:47 crc kubenswrapper[4869]: E0202 15:00:47.227342 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37\": container with ID starting with 30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37 not found: ID does not exist" containerID="30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.227368 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37"} err="failed to get container status \"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37\": rpc error: code = NotFound desc = could not find container \"30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37\": container with ID starting with 30439d5d9add4109e271fb86839cab054603439ae52d80778cb65263faa5cd37 not found: ID does not exist" Feb 02 15:00:47 crc kubenswrapper[4869]: I0202 15:00:47.478112 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" path="/var/lib/kubelet/pods/844dd20e-3c4a-4900-91d4-5783dc09ffda/volumes" Feb 02 15:00:48 crc kubenswrapper[4869]: I0202 15:00:48.144626 4869 generic.go:334] "Generic (PLEG): container finished" podID="e52df171-dd1f-48e9-8dc7-06008925405b" containerID="27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a" exitCode=0 Feb 02 15:00:48 crc kubenswrapper[4869]: I0202 15:00:48.144685 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerDied","Data":"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a"} Feb 02 15:00:48 crc kubenswrapper[4869]: I0202 15:00:48.144715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerStarted","Data":"55d614c2f209f450d8b9684eaa80cfc66141e898c9070c7110dcb739f684745a"} Feb 02 15:00:49 crc kubenswrapper[4869]: I0202 15:00:49.157216 4869 generic.go:334] "Generic (PLEG): container finished" podID="e52df171-dd1f-48e9-8dc7-06008925405b" containerID="df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1" exitCode=0 Feb 02 15:00:49 crc kubenswrapper[4869]: I0202 15:00:49.157307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerDied","Data":"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1"} Feb 02 15:00:50 crc kubenswrapper[4869]: I0202 15:00:50.169162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerStarted","Data":"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd"} Feb 02 15:00:50 crc kubenswrapper[4869]: I0202 15:00:50.191098 
4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n4jws" podStartSLOduration=2.443487557 podStartE2EDuration="4.191074453s" podCreationTimestamp="2026-02-02 15:00:46 +0000 UTC" firstStartedPulling="2026-02-02 15:00:48.148352879 +0000 UTC m=+1649.792989649" lastFinishedPulling="2026-02-02 15:00:49.895939775 +0000 UTC m=+1651.540576545" observedRunningTime="2026-02-02 15:00:50.187521685 +0000 UTC m=+1651.832158455" watchObservedRunningTime="2026-02-02 15:00:50.191074453 +0000 UTC m=+1651.835711233" Feb 02 15:00:56 crc kubenswrapper[4869]: I0202 15:00:56.613334 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:56 crc kubenswrapper[4869]: I0202 15:00:56.613663 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:56 crc kubenswrapper[4869]: I0202 15:00:56.699321 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:57 crc kubenswrapper[4869]: I0202 15:00:57.311874 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:57 crc kubenswrapper[4869]: I0202 15:00:57.388207 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.273748 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n4jws" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="registry-server" containerID="cri-o://f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" gracePeriod=2 Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.471200 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:00:59 crc kubenswrapper[4869]: E0202 15:00:59.472113 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.723556 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.840925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") pod \"e52df171-dd1f-48e9-8dc7-06008925405b\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.841247 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") pod \"e52df171-dd1f-48e9-8dc7-06008925405b\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.841289 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") pod \"e52df171-dd1f-48e9-8dc7-06008925405b\" (UID: \"e52df171-dd1f-48e9-8dc7-06008925405b\") " Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.842192 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities" (OuterVolumeSpecName: "utilities") pod "e52df171-dd1f-48e9-8dc7-06008925405b" (UID: "e52df171-dd1f-48e9-8dc7-06008925405b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.854394 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5" (OuterVolumeSpecName: "kube-api-access-p4pz5") pod "e52df171-dd1f-48e9-8dc7-06008925405b" (UID: "e52df171-dd1f-48e9-8dc7-06008925405b"). InnerVolumeSpecName "kube-api-access-p4pz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.877059 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e52df171-dd1f-48e9-8dc7-06008925405b" (UID: "e52df171-dd1f-48e9-8dc7-06008925405b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.944483 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.944538 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e52df171-dd1f-48e9-8dc7-06008925405b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:00:59 crc kubenswrapper[4869]: I0202 15:00:59.944556 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4pz5\" (UniqueName: \"kubernetes.io/projected/e52df171-dd1f-48e9-8dc7-06008925405b-kube-api-access-p4pz5\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.163491 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500741-9h6gs"] Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164084 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="extract-content" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164114 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="extract-content" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164130 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164197 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164212 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="extract-utilities" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164221 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="extract-utilities" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164240 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="extract-content" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164248 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="extract-content" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164277 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164286 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.164300 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="extract-utilities" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164308 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="extract-utilities" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164557 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e52df171-dd1f-48e9-8dc7-06008925405b" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.164593 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="844dd20e-3c4a-4900-91d4-5783dc09ffda" containerName="registry-server" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.165468 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.182123 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500741-9h6gs"] Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.250750 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.250821 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.250982 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.251057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.286999 4869 generic.go:334] "Generic (PLEG): container finished" podID="e52df171-dd1f-48e9-8dc7-06008925405b" containerID="f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" exitCode=0 Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.287061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerDied","Data":"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd"} Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.287097 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4jws" event={"ID":"e52df171-dd1f-48e9-8dc7-06008925405b","Type":"ContainerDied","Data":"55d614c2f209f450d8b9684eaa80cfc66141e898c9070c7110dcb739f684745a"} Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.287123 4869 scope.go:117] "RemoveContainer" containerID="f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.287320 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4jws" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.327358 4869 scope.go:117] "RemoveContainer" containerID="df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.328069 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.341233 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4jws"] Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.351021 4869 scope.go:117] "RemoveContainer" containerID="27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.352766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.352941 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.353034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.353086 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.357820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.358128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.358316 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.372131 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") pod \"keystone-cron-29500741-9h6gs\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.402783 4869 scope.go:117] "RemoveContainer" containerID="f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.403419 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd\": container with ID starting with f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd not found: ID does not exist" containerID="f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.403474 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd"} err="failed to get container status \"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd\": rpc error: code = NotFound desc = could not find container \"f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd\": container with ID starting with f020c1438b0c279f815d7851a746e60891741a3a293fda8028d88076bc06d4bd not found: ID does not exist" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.403505 4869 scope.go:117] "RemoveContainer" containerID="df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.403935 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1\": container with ID starting with df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1 not found: ID does not exist" containerID="df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.403970 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1"} err="failed to get container status \"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1\": rpc error: code = NotFound desc = could not find container \"df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1\": container with ID starting with df57e4ea787898223115291b378385a18d9903569e0902932c718a83d1b78ec1 not found: ID does not exist" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.403993 4869 scope.go:117] "RemoveContainer" containerID="27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a" Feb 02 15:01:00 crc kubenswrapper[4869]: E0202 15:01:00.404429 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a\": container with ID starting with 27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a not found: ID does not exist" containerID="27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.404473 4869 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a"} err="failed to get container status \"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a\": rpc error: code = NotFound desc = could not find container \"27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a\": container with ID starting with 27285b96db04d90979d18c04b7aa2da28c059e4a6a92c1358e98e82dc713bd6a not found: ID does not exist" Feb 02 15:01:00 crc kubenswrapper[4869]: I0202 15:01:00.488346 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:01 crc kubenswrapper[4869]: I0202 15:01:01.472865 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52df171-dd1f-48e9-8dc7-06008925405b" path="/var/lib/kubelet/pods/e52df171-dd1f-48e9-8dc7-06008925405b/volumes" Feb 02 15:01:01 crc kubenswrapper[4869]: I0202 15:01:01.579867 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500741-9h6gs"] Feb 02 15:01:02 crc kubenswrapper[4869]: I0202 15:01:02.312783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500741-9h6gs" event={"ID":"d6019cb5-097c-4e32-b08f-dd117d4bcdf7","Type":"ContainerStarted","Data":"94f7fa1eef8aa02c6c9da7b1e358bd9e6450b0e6b3255bb4c36f552b88386ebc"} Feb 02 15:01:02 crc kubenswrapper[4869]: I0202 15:01:02.313165 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500741-9h6gs" event={"ID":"d6019cb5-097c-4e32-b08f-dd117d4bcdf7","Type":"ContainerStarted","Data":"a84490eaf8fef5ba7482c489b20bc4e41988271328ca98b054c70e9288d7abae"} Feb 02 15:01:02 crc kubenswrapper[4869]: I0202 15:01:02.339702 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500741-9h6gs" podStartSLOduration=2.3396799489999998 podStartE2EDuration="2.339679949s" podCreationTimestamp="2026-02-02 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:01:02.329924299 +0000 UTC m=+1663.974561079" watchObservedRunningTime="2026-02-02 15:01:02.339679949 +0000 UTC m=+1663.984316719" Feb 02 15:01:04 crc kubenswrapper[4869]: I0202 15:01:04.336062 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6019cb5-097c-4e32-b08f-dd117d4bcdf7" containerID="94f7fa1eef8aa02c6c9da7b1e358bd9e6450b0e6b3255bb4c36f552b88386ebc" exitCode=0 Feb 02 15:01:04 crc kubenswrapper[4869]: I0202 15:01:04.336156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500741-9h6gs" event={"ID":"d6019cb5-097c-4e32-b08f-dd117d4bcdf7","Type":"ContainerDied","Data":"94f7fa1eef8aa02c6c9da7b1e358bd9e6450b0e6b3255bb4c36f552b88386ebc"} Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.696850 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.797617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") pod \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.798294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") pod \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.798350 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") pod \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.798442 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") pod \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\" (UID: \"d6019cb5-097c-4e32-b08f-dd117d4bcdf7\") " Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.807716 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d6019cb5-097c-4e32-b08f-dd117d4bcdf7" (UID: "d6019cb5-097c-4e32-b08f-dd117d4bcdf7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.808325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8" (OuterVolumeSpecName: "kube-api-access-wb5m8") pod "d6019cb5-097c-4e32-b08f-dd117d4bcdf7" (UID: "d6019cb5-097c-4e32-b08f-dd117d4bcdf7"). InnerVolumeSpecName "kube-api-access-wb5m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.827423 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6019cb5-097c-4e32-b08f-dd117d4bcdf7" (UID: "d6019cb5-097c-4e32-b08f-dd117d4bcdf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.861819 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data" (OuterVolumeSpecName: "config-data") pod "d6019cb5-097c-4e32-b08f-dd117d4bcdf7" (UID: "d6019cb5-097c-4e32-b08f-dd117d4bcdf7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.901465 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.901545 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.901563 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb5m8\" (UniqueName: \"kubernetes.io/projected/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-kube-api-access-wb5m8\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:05 crc kubenswrapper[4869]: I0202 15:01:05.901576 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6019cb5-097c-4e32-b08f-dd117d4bcdf7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:06 crc kubenswrapper[4869]: I0202 15:01:06.359152 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500741-9h6gs" event={"ID":"d6019cb5-097c-4e32-b08f-dd117d4bcdf7","Type":"ContainerDied","Data":"a84490eaf8fef5ba7482c489b20bc4e41988271328ca98b054c70e9288d7abae"} Feb 02 15:01:06 crc kubenswrapper[4869]: I0202 15:01:06.359200 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84490eaf8fef5ba7482c489b20bc4e41988271328ca98b054c70e9288d7abae" Feb 02 15:01:06 crc kubenswrapper[4869]: I0202 15:01:06.359259 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500741-9h6gs" Feb 02 15:01:13 crc kubenswrapper[4869]: I0202 15:01:13.463896 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:01:13 crc kubenswrapper[4869]: E0202 15:01:13.464510 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:01:25 crc kubenswrapper[4869]: I0202 15:01:25.463691 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:01:25 crc kubenswrapper[4869]: E0202 15:01:25.465042 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:01:31 crc kubenswrapper[4869]: I0202 15:01:31.084596 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hqz6l"] Feb 02 15:01:31 crc kubenswrapper[4869]: I0202 15:01:31.097750 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hqz6l"] Feb 02 15:01:31 crc kubenswrapper[4869]: 
I0202 15:01:31.477056 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cae9d7b-b1d0-4745-801d-14b5f1e5f959" path="/var/lib/kubelet/pods/2cae9d7b-b1d0-4745-801d-14b5f1e5f959/volumes" Feb 02 15:01:32 crc kubenswrapper[4869]: I0202 15:01:32.044295 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-6nfjx"] Feb 02 15:01:32 crc kubenswrapper[4869]: I0202 15:01:32.059547 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"] Feb 02 15:01:32 crc kubenswrapper[4869]: I0202 15:01:32.070010 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-6nfjx"] Feb 02 15:01:32 crc kubenswrapper[4869]: I0202 15:01:32.080850 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-de8f-account-create-update-7gxr8"] Feb 02 15:01:33 crc kubenswrapper[4869]: I0202 15:01:33.482401 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ed4541-0cbb-4412-b054-fe72923fc2ba" path="/var/lib/kubelet/pods/57ed4541-0cbb-4412-b054-fe72923fc2ba/volumes" Feb 02 15:01:33 crc kubenswrapper[4869]: I0202 15:01:33.483829 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc85b87e-a9f7-4407-8f88-59b46f424fe5" path="/var/lib/kubelet/pods/fc85b87e-a9f7-4407-8f88-59b46f424fe5/volumes" Feb 02 15:01:36 crc kubenswrapper[4869]: I0202 15:01:36.462994 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:01:36 crc kubenswrapper[4869]: E0202 15:01:36.463672 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:01:38 crc kubenswrapper[4869]: I0202 15:01:38.047542 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"] Feb 02 15:01:38 crc kubenswrapper[4869]: I0202 15:01:38.060073 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-775d-account-create-update-mc2f8"] Feb 02 15:01:39 crc kubenswrapper[4869]: I0202 15:01:39.480159 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667b6a5a-a090-407f-a4c1-229be7db4fbc" path="/var/lib/kubelet/pods/667b6a5a-a090-407f-a4c1-229be7db4fbc/volumes" Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.036220 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"] Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.048275 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wqbqn"] Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.058070 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-66c2-account-create-update-m2vvf"] Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.067295 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wqbqn"] Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.479877 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663a2e70-1d18-41b3-bc31-7e8b44f00450" 
path="/var/lib/kubelet/pods/663a2e70-1d18-41b3-bc31-7e8b44f00450/volumes" Feb 02 15:01:41 crc kubenswrapper[4869]: I0202 15:01:41.480875 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695a8791-53fd-414d-af01-753483223d32" path="/var/lib/kubelet/pods/695a8791-53fd-414d-af01-753483223d32/volumes" Feb 02 15:01:43 crc kubenswrapper[4869]: I0202 15:01:43.752775 4869 generic.go:334] "Generic (PLEG): container finished" podID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" containerID="7d5e25ac19c483d6558c58fba2ace1e684808d4e3b1a821e0d5e58c6d0be0112" exitCode=0 Feb 02 15:01:43 crc kubenswrapper[4869]: I0202 15:01:43.753102 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" event={"ID":"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083","Type":"ContainerDied","Data":"7d5e25ac19c483d6558c58fba2ace1e684808d4e3b1a821e0d5e58c6d0be0112"} Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.189045 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.378035 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") pod \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.378694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") pod \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.379196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") pod \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.380404 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") pod \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\" (UID: \"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083\") " Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.387836 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" (UID: "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.391577 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv" (OuterVolumeSpecName: "kube-api-access-pcpxv") pod "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" (UID: "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083"). InnerVolumeSpecName "kube-api-access-pcpxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.414351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory" (OuterVolumeSpecName: "inventory") pod "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" (UID: "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.431534 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" (UID: "ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.484132 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.484384 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.484496 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.484568 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcpxv\" (UniqueName: \"kubernetes.io/projected/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083-kube-api-access-pcpxv\") on node \"crc\" DevicePath \"\"" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.776419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" event={"ID":"ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083","Type":"ContainerDied","Data":"45eb9092023474510986497b58938f8c056cf9410d12598b17849390008c5c0f"} Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.776471 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45eb9092023474510986497b58938f8c056cf9410d12598b17849390008c5c0f" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.776562 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.865786 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:01:45 crc kubenswrapper[4869]: E0202 15:01:45.866336 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.866357 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:01:45 crc kubenswrapper[4869]: E0202 15:01:45.866384 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6019cb5-097c-4e32-b08f-dd117d4bcdf7" containerName="keystone-cron" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.866396 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6019cb5-097c-4e32-b08f-dd117d4bcdf7" containerName="keystone-cron" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.866576 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6019cb5-097c-4e32-b08f-dd117d4bcdf7" containerName="keystone-cron" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.866604 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.867272 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.869991 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.870303 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.870447 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.870755 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.890219 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.994655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.994705 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: 
\"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:45 crc kubenswrapper[4869]: I0202 15:01:45.994837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.098108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.098454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.098487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.108602 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.121620 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.138581 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.188107 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.726610 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:01:46 crc kubenswrapper[4869]: I0202 15:01:46.786593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" event={"ID":"b13d039a-826a-4431-a147-9550c40460d2","Type":"ContainerStarted","Data":"45cdf02dcf660f423cec4c8cf609c87cf1d944ff266f947e009a6246dcc81363"} Feb 02 15:01:47 crc kubenswrapper[4869]: I0202 15:01:47.801740 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" event={"ID":"b13d039a-826a-4431-a147-9550c40460d2","Type":"ContainerStarted","Data":"1780e4b116d1f7c5ebd11904a615204e47379474971f83c266f93d8577ef7a03"} Feb 02 15:01:47 crc kubenswrapper[4869]: I0202 15:01:47.821801 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" podStartSLOduration=2.284760652 podStartE2EDuration="2.821773939s" podCreationTimestamp="2026-02-02 15:01:45 +0000 UTC" firstStartedPulling="2026-02-02 15:01:46.7235349 +0000 UTC m=+1708.368171680" lastFinishedPulling="2026-02-02 15:01:47.260548207 +0000 UTC m=+1708.905184967" observedRunningTime="2026-02-02 15:01:47.815488273 +0000 UTC m=+1709.460125043" watchObservedRunningTime="2026-02-02 15:01:47.821773939 +0000 UTC m=+1709.466410709" Feb 02 15:01:50 crc kubenswrapper[4869]: I0202 15:01:50.032311 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 15:01:50 crc kubenswrapper[4869]: I0202 15:01:50.041073 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qx9sp"] Feb 02 15:01:50 crc kubenswrapper[4869]: I0202 15:01:50.462325 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:01:50 crc kubenswrapper[4869]: E0202 15:01:50.462580 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:01:51 crc kubenswrapper[4869]: I0202 15:01:51.475688 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cedd0523-58d4-494f-9d04-76029ad9ca4d" path="/var/lib/kubelet/pods/cedd0523-58d4-494f-9d04-76029ad9ca4d/volumes" Feb 02 15:02:05 crc kubenswrapper[4869]: I0202 15:02:05.462568 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:05 crc kubenswrapper[4869]: E0202 15:02:05.463468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:07 crc kubenswrapper[4869]: I0202 15:02:07.051631 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 15:02:07 crc kubenswrapper[4869]: I0202 15:02:07.077678 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nmqdp"] Feb 02 15:02:07 crc kubenswrapper[4869]: I0202 15:02:07.473675 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d01d875-1fd0-4d36-9077-337e2549b17c" path="/var/lib/kubelet/pods/8d01d875-1fd0-4d36-9077-337e2549b17c/volumes" Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.039270 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.052029 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9bcf-account-create-update-pprmg"] Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.060498 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.068620 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wzwcn"] Feb 02 15:02:20 crc kubenswrapper[4869]: I0202 15:02:20.462889 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:20 crc kubenswrapper[4869]: E0202 15:02:20.464662 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:21 crc kubenswrapper[4869]: I0202 15:02:21.474804 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e52e3f-cffb-44c2-9532-d645fa630d61" path="/var/lib/kubelet/pods/66e52e3f-cffb-44c2-9532-d645fa630d61/volumes" Feb 02 15:02:21 crc kubenswrapper[4869]: I0202 15:02:21.475449 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a91413a-aa7c-4564-bf72-53071981cd62" path="/var/lib/kubelet/pods/8a91413a-aa7c-4564-bf72-53071981cd62/volumes" Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.045867 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.057634 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.065608 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.076469 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2561-account-create-update-zwwnx"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.085071 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.092109 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f93f-account-create-update-qbxcg"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.103207 4869 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-kp9g2"] Feb 02 15:02:24 crc kubenswrapper[4869]: I0202 15:02:24.110902 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bznrb"] Feb 02 15:02:25 crc kubenswrapper[4869]: I0202 15:02:25.474082 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa7f6b2-de14-408c-8960-662c2ab0e481" path="/var/lib/kubelet/pods/6aa7f6b2-de14-408c-8960-662c2ab0e481/volumes" Feb 02 15:02:25 crc kubenswrapper[4869]: I0202 15:02:25.475226 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5268e6d-82fe-45d8-a243-d37b326346a6" path="/var/lib/kubelet/pods/b5268e6d-82fe-45d8-a243-d37b326346a6/volumes" Feb 02 15:02:25 crc kubenswrapper[4869]: I0202 15:02:25.476363 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be36a818-4a20-4330-ade7-225a479d7e98" path="/var/lib/kubelet/pods/be36a818-4a20-4330-ade7-225a479d7e98/volumes" Feb 02 15:02:25 crc kubenswrapper[4869]: I0202 15:02:25.477742 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0" path="/var/lib/kubelet/pods/dd14cdd1-49b1-49a6-a683-44fd0cbdd5b0/volumes" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.781868 4869 scope.go:117] "RemoveContainer" containerID="8ad30a46b6571b102d653acdd91c3117aa9caffad9f46651f8d10f3bce6d1da5" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.824896 4869 scope.go:117] "RemoveContainer" containerID="59d9f27d8d1ae8627d4c79fa51d4258f445b3484686b6e2d609c49071e26d3ff" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.879547 4869 scope.go:117] "RemoveContainer" containerID="fd9a1056bb847e46dd277ee512ce8a86dedc30d17b4d1ccaa855457de2552b81" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.927585 4869 scope.go:117] "RemoveContainer" containerID="d6f5aeb4cb8e140e0ec76f751f66f1f3334b226154def23e06d3735565e7a00e" Feb 02 15:02:27 crc kubenswrapper[4869]: I0202 15:02:27.965501 4869 scope.go:117] "RemoveContainer" containerID="bc23c4af30b56127451b57906851e79c3c56f83ff81cbe94961025e57448181c" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.020786 4869 scope.go:117] "RemoveContainer" containerID="6d8d94685f54694bdd3d654fd30340b20f11060d58afcb8b6db65cc019ab404b" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.053435 4869 scope.go:117] "RemoveContainer" containerID="213e1848995e356634b595c82a82047cb0a5c02652baad5bea2863f82f47bdbc" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.073756 4869 scope.go:117] "RemoveContainer" containerID="1e93de4900a661d5dcfe910c46bd9a967faddfa20ef1e38b79c228fa5ebb022d" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.091294 4869 scope.go:117] "RemoveContainer" containerID="df71e565c4a1044f26889a098a902ff1f6378130dffa835480e68b3744d9258f" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.112087 4869 scope.go:117] "RemoveContainer" containerID="9b15642290472abfbc4ace64421c6af055e5988041270bd6769c924998672a78" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.136748 4869 scope.go:117] "RemoveContainer" containerID="78a897732627685686d46c9cdceda0daa9d9401b96294c575ac6408193fb1e9d" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.158374 4869 scope.go:117] "RemoveContainer" containerID="787a10a68dc71dc578d2b7b04e714c6b6fd52e9d48dc7f1b9e14020160b32eec" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.188329 4869 scope.go:117] "RemoveContainer" 
containerID="a67405c792b46e1c7a87b10db412f756b77b32607171121e6cfbf4745d19567f" Feb 02 15:02:28 crc kubenswrapper[4869]: I0202 15:02:28.209503 4869 scope.go:117] "RemoveContainer" containerID="6bee5e75e372cb2aba6043898d69e0608376d17242ffd94d857f28f9662a9176" Feb 02 15:02:29 crc kubenswrapper[4869]: I0202 15:02:29.027829 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 15:02:29 crc kubenswrapper[4869]: I0202 15:02:29.036879 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6zf6z"] Feb 02 15:02:29 crc kubenswrapper[4869]: I0202 15:02:29.487280 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b3583d5-e064-4a64-89ba-a97a7fcc993d" path="/var/lib/kubelet/pods/2b3583d5-e064-4a64-89ba-a97a7fcc993d/volumes" Feb 02 15:02:31 crc kubenswrapper[4869]: I0202 15:02:31.462669 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:31 crc kubenswrapper[4869]: E0202 15:02:31.463305 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:44 crc kubenswrapper[4869]: I0202 15:02:44.463259 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:44 crc kubenswrapper[4869]: E0202 15:02:44.464105 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:55 crc kubenswrapper[4869]: I0202 15:02:55.540291 4869 generic.go:334] "Generic (PLEG): container finished" podID="b13d039a-826a-4431-a147-9550c40460d2" containerID="1780e4b116d1f7c5ebd11904a615204e47379474971f83c266f93d8577ef7a03" exitCode=0 Feb 02 15:02:55 crc kubenswrapper[4869]: I0202 15:02:55.540373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" event={"ID":"b13d039a-826a-4431-a147-9550c40460d2","Type":"ContainerDied","Data":"1780e4b116d1f7c5ebd11904a615204e47379474971f83c266f93d8577ef7a03"} Feb 02 15:02:56 crc kubenswrapper[4869]: I0202 15:02:56.462965 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:02:56 crc kubenswrapper[4869]: E0202 15:02:56.463581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:02:56 crc kubenswrapper[4869]: I0202 15:02:56.942996 4869 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.111374 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") pod \"b13d039a-826a-4431-a147-9550c40460d2\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.111453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") pod \"b13d039a-826a-4431-a147-9550c40460d2\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.111630 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") pod \"b13d039a-826a-4431-a147-9550c40460d2\" (UID: \"b13d039a-826a-4431-a147-9550c40460d2\") " Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.118322 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4" (OuterVolumeSpecName: "kube-api-access-frkw4") pod "b13d039a-826a-4431-a147-9550c40460d2" (UID: "b13d039a-826a-4431-a147-9550c40460d2"). InnerVolumeSpecName "kube-api-access-frkw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.141788 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b13d039a-826a-4431-a147-9550c40460d2" (UID: "b13d039a-826a-4431-a147-9550c40460d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.155203 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory" (OuterVolumeSpecName: "inventory") pod "b13d039a-826a-4431-a147-9550c40460d2" (UID: "b13d039a-826a-4431-a147-9550c40460d2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.214229 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frkw4\" (UniqueName: \"kubernetes.io/projected/b13d039a-826a-4431-a147-9550c40460d2-kube-api-access-frkw4\") on node \"crc\" DevicePath \"\"" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.214281 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.214295 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13d039a-826a-4431-a147-9550c40460d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.561453 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" event={"ID":"b13d039a-826a-4431-a147-9550c40460d2","Type":"ContainerDied","Data":"45cdf02dcf660f423cec4c8cf609c87cf1d944ff266f947e009a6246dcc81363"} Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.561508 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45cdf02dcf660f423cec4c8cf609c87cf1d944ff266f947e009a6246dcc81363" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.561584 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.645990 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:02:57 crc kubenswrapper[4869]: E0202 15:02:57.646640 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13d039a-826a-4431-a147-9550c40460d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.646668 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13d039a-826a-4431-a147-9550c40460d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.647005 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13d039a-826a-4431-a147-9550c40460d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.647979 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.651769 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.652056 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.652789 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.652972 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.665296 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.727878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.728399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.728583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.831108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.831172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.831213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.835195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.835530 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.851631 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:57 crc kubenswrapper[4869]: I0202 15:02:57.975915 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:02:58 crc kubenswrapper[4869]: I0202 15:02:58.526110 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:02:58 crc kubenswrapper[4869]: I0202 15:02:58.573621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" event={"ID":"a111a064-b5cf-4489-8262-1aef88170e07","Type":"ContainerStarted","Data":"a92d5787b2d570b9ee527185f349f290dbbb140166f0cf740ed0e7247ebd4c92"} Feb 02 15:02:59 crc kubenswrapper[4869]: I0202 15:02:59.582614 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" event={"ID":"a111a064-b5cf-4489-8262-1aef88170e07","Type":"ContainerStarted","Data":"e77dd6e80ad1057a4bcf30f60becbca014a57b0ad1a2095aca5495f54d7091d0"} Feb 02 15:02:59 crc kubenswrapper[4869]: I0202 15:02:59.609592 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" podStartSLOduration=2.188686635 podStartE2EDuration="2.609568434s" podCreationTimestamp="2026-02-02 15:02:57 +0000 UTC" firstStartedPulling="2026-02-02 15:02:58.528013388 +0000 UTC m=+1780.172650158" lastFinishedPulling="2026-02-02 15:02:58.948895187 +0000 UTC m=+1780.593531957" observedRunningTime="2026-02-02 15:02:59.608766785 +0000 UTC m=+1781.253403555" watchObservedRunningTime="2026-02-02 15:02:59.609568434 +0000 UTC m=+1781.254205204" Feb 02 15:03:01 crc kubenswrapper[4869]: I0202 15:03:01.079978 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 
15:03:01 crc kubenswrapper[4869]: I0202 15:03:01.089591 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hz9pj"] Feb 02 15:03:01 crc kubenswrapper[4869]: I0202 15:03:01.481375 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367199b6-3340-454e-acc5-478f9b35b2df" path="/var/lib/kubelet/pods/367199b6-3340-454e-acc5-478f9b35b2df/volumes" Feb 02 15:03:04 crc kubenswrapper[4869]: I0202 15:03:04.670568 4869 generic.go:334] "Generic (PLEG): container finished" podID="a111a064-b5cf-4489-8262-1aef88170e07" containerID="e77dd6e80ad1057a4bcf30f60becbca014a57b0ad1a2095aca5495f54d7091d0" exitCode=0 Feb 02 15:03:04 crc kubenswrapper[4869]: I0202 15:03:04.670661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" event={"ID":"a111a064-b5cf-4489-8262-1aef88170e07","Type":"ContainerDied","Data":"e77dd6e80ad1057a4bcf30f60becbca014a57b0ad1a2095aca5495f54d7091d0"} Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.168372 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.220499 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") pod \"a111a064-b5cf-4489-8262-1aef88170e07\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.220621 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") pod \"a111a064-b5cf-4489-8262-1aef88170e07\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.220819 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") pod \"a111a064-b5cf-4489-8262-1aef88170e07\" (UID: \"a111a064-b5cf-4489-8262-1aef88170e07\") " Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.235152 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64" (OuterVolumeSpecName: "kube-api-access-6vs64") pod "a111a064-b5cf-4489-8262-1aef88170e07" (UID: "a111a064-b5cf-4489-8262-1aef88170e07"). InnerVolumeSpecName "kube-api-access-6vs64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.252540 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a111a064-b5cf-4489-8262-1aef88170e07" (UID: "a111a064-b5cf-4489-8262-1aef88170e07"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.254998 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory" (OuterVolumeSpecName: "inventory") pod "a111a064-b5cf-4489-8262-1aef88170e07" (UID: "a111a064-b5cf-4489-8262-1aef88170e07"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.323332 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.323387 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vs64\" (UniqueName: \"kubernetes.io/projected/a111a064-b5cf-4489-8262-1aef88170e07-kube-api-access-6vs64\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.323404 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a111a064-b5cf-4489-8262-1aef88170e07-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.695957 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" event={"ID":"a111a064-b5cf-4489-8262-1aef88170e07","Type":"ContainerDied","Data":"a92d5787b2d570b9ee527185f349f290dbbb140166f0cf740ed0e7247ebd4c92"} Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.696037 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92d5787b2d570b9ee527185f349f290dbbb140166f0cf740ed0e7247ebd4c92" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.696146 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.815482 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:03:06 crc kubenswrapper[4869]: E0202 15:03:06.817217 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a111a064-b5cf-4489-8262-1aef88170e07" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.817248 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a111a064-b5cf-4489-8262-1aef88170e07" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.817462 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a111a064-b5cf-4489-8262-1aef88170e07" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.818344 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.823749 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.823991 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.824115 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.824538 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.841141 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.951728 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.951798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:06 crc kubenswrapper[4869]: I0202 15:03:06.952019 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.054225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.054289 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.054326 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.055535 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.061933 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.062333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.080188 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-b8vlj\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.085231 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.093132 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.103264 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4fqzr"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.116676 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zxtsl"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.129459 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-q447q"] Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.140529 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.473338 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5f9f47-1ba0-4d37-8597-874a62d9045e" path="/var/lib/kubelet/pods/2a5f9f47-1ba0-4d37-8597-874a62d9045e/volumes" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.474289 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818ee387-cf73-45bc-8925-c234d5fd8ee3" path="/var/lib/kubelet/pods/818ee387-cf73-45bc-8925-c234d5fd8ee3/volumes" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.474829 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b" path="/var/lib/kubelet/pods/f67a8d6b-ae75-4667-9c1a-ac4d2da5d18b/volumes" Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.680026 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:03:07 crc kubenswrapper[4869]: W0202 15:03:07.693939 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda82a77f6_7b23_4723_8ba7_a8754d3cc15f.slice/crio-387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe WatchSource:0}: Error finding container 387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe: Status 404 returned error can't find the container with id 387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe Feb 02 15:03:07 crc kubenswrapper[4869]: I0202 15:03:07.707965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" event={"ID":"a82a77f6-7b23-4723-8ba7-a8754d3cc15f","Type":"ContainerStarted","Data":"387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe"} Feb 02 15:03:08 crc kubenswrapper[4869]: I0202 15:03:08.729641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" event={"ID":"a82a77f6-7b23-4723-8ba7-a8754d3cc15f","Type":"ContainerStarted","Data":"6541835580f7732c564fce1cfc6a7a903f9541014fbd453cd8d73ffdda64ec00"} Feb 02 15:03:08 crc kubenswrapper[4869]: I0202 15:03:08.785425 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" podStartSLOduration=2.349037224 podStartE2EDuration="2.785397955s" podCreationTimestamp="2026-02-02 15:03:06 +0000 UTC" firstStartedPulling="2026-02-02 15:03:07.699130342 +0000 UTC m=+1789.343767112" lastFinishedPulling="2026-02-02 15:03:08.135491073 +0000 UTC m=+1789.780127843" observedRunningTime="2026-02-02 15:03:08.77662874 +0000 UTC m=+1790.421265530" watchObservedRunningTime="2026-02-02 15:03:08.785397955 +0000 UTC m=+1790.430034725" Feb 02 15:03:09 crc kubenswrapper[4869]: I0202 15:03:09.471272 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:03:09 crc kubenswrapper[4869]: E0202 15:03:09.471688 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:03:23 crc kubenswrapper[4869]: I0202 15:03:23.462947 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:03:23 crc kubenswrapper[4869]: E0202 15:03:23.464344 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:03:26 crc kubenswrapper[4869]: I0202 15:03:26.035125 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 15:03:26 crc kubenswrapper[4869]: I0202 15:03:26.044833 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-s2dwg"] Feb 02 15:03:27 crc kubenswrapper[4869]: I0202 15:03:27.478726 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e63b99-6d06-44ea-a061-b9f79551126a" path="/var/lib/kubelet/pods/f0e63b99-6d06-44ea-a061-b9f79551126a/volumes" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.432676 4869 scope.go:117] "RemoveContainer" containerID="0aa88d3b57202e0e2723bae5c11f79197f7959d3a183ef080d27b30920dc1f8a" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.479864 4869 scope.go:117] "RemoveContainer" containerID="cecab4e9b99e25e3a70710711bfe9446ff16abe3509be2bbfedce73c81eaeb89" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.526341 4869 scope.go:117] "RemoveContainer" containerID="8962be87127b6e0d3f3ece55fe53f40715482971642999f7d7b74c30b09eeea6" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.577479 4869 scope.go:117] "RemoveContainer" containerID="da76a4a0a2fd91d41e48fb82a3fd0ddaf3e6b22ad0d146b95f9759bc6eb3ab36" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.629721 4869 scope.go:117] "RemoveContainer" containerID="f5f3adb22514a5728bdaa407debd5241eb6b5669db2e00b862292c4751c58656" Feb 02 15:03:28 crc kubenswrapper[4869]: I0202 15:03:28.701476 4869 scope.go:117] "RemoveContainer" containerID="8bb80d715d8f5ab6d26df204394e8bf93606b57fc5408d917fc1dee2b0e16af2" Feb 02 15:03:38 crc kubenswrapper[4869]: I0202 15:03:38.462693 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:03:38 crc kubenswrapper[4869]: E0202 15:03:38.464101 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:03:43 crc kubenswrapper[4869]: I0202 15:03:43.040518 4869 generic.go:334] "Generic (PLEG): container finished" podID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" containerID="6541835580f7732c564fce1cfc6a7a903f9541014fbd453cd8d73ffdda64ec00" exitCode=0 Feb 02 15:03:43 crc kubenswrapper[4869]: I0202 15:03:43.040644 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" 
event={"ID":"a82a77f6-7b23-4723-8ba7-a8754d3cc15f","Type":"ContainerDied","Data":"6541835580f7732c564fce1cfc6a7a903f9541014fbd453cd8d73ffdda64ec00"} Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.549203 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.716984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") pod \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.717157 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") pod \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.717197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") pod \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\" (UID: \"a82a77f6-7b23-4723-8ba7-a8754d3cc15f\") " Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.727443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2" (OuterVolumeSpecName: "kube-api-access-5fqt2") pod "a82a77f6-7b23-4723-8ba7-a8754d3cc15f" (UID: "a82a77f6-7b23-4723-8ba7-a8754d3cc15f"). InnerVolumeSpecName "kube-api-access-5fqt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.750698 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a82a77f6-7b23-4723-8ba7-a8754d3cc15f" (UID: "a82a77f6-7b23-4723-8ba7-a8754d3cc15f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.758664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory" (OuterVolumeSpecName: "inventory") pod "a82a77f6-7b23-4723-8ba7-a8754d3cc15f" (UID: "a82a77f6-7b23-4723-8ba7-a8754d3cc15f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.819896 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.819974 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:44 crc kubenswrapper[4869]: I0202 15:03:44.819991 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fqt2\" (UniqueName: \"kubernetes.io/projected/a82a77f6-7b23-4723-8ba7-a8754d3cc15f-kube-api-access-5fqt2\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.060818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" event={"ID":"a82a77f6-7b23-4723-8ba7-a8754d3cc15f","Type":"ContainerDied","Data":"387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe"} Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.060861 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="387611e23bbd9f0a107dcee15d93de19267f26e773569abdbbdf3d1e356fedfe" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.060926 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.153274 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:03:45 crc kubenswrapper[4869]: E0202 15:03:45.153674 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.153688 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.153875 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.154871 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.158366 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.158813 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.159174 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.169853 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.170532 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.237768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.237852 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.237896 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd79q\" (UniqueName: \"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.340235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.340316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.340354 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd79q\" (UniqueName: 
\"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.346089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.346128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.359301 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd79q\" (UniqueName: \"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:45 crc kubenswrapper[4869]: I0202 15:03:45.471230 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:46 crc kubenswrapper[4869]: I0202 15:03:46.148472 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:03:47 crc kubenswrapper[4869]: I0202 15:03:47.084680 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" event={"ID":"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56","Type":"ContainerStarted","Data":"96680a39ea5859acbd3d0dd33516c2456928e17934810aa50411921bfa3dafe9"} Feb 02 15:03:47 crc kubenswrapper[4869]: I0202 15:03:47.085295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" event={"ID":"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56","Type":"ContainerStarted","Data":"a6463f9b07a19640f75c366e973d6b134385522bb069d063749727ab03943faa"} Feb 02 15:03:47 crc kubenswrapper[4869]: I0202 15:03:47.117498 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" podStartSLOduration=1.700854248 podStartE2EDuration="2.117470053s" podCreationTimestamp="2026-02-02 15:03:45 +0000 UTC" firstStartedPulling="2026-02-02 15:03:46.152375 +0000 UTC m=+1827.797011770" lastFinishedPulling="2026-02-02 15:03:46.568990805 +0000 UTC m=+1828.213627575" observedRunningTime="2026-02-02 15:03:47.109016825 +0000 UTC m=+1828.753653635" watchObservedRunningTime="2026-02-02 15:03:47.117470053 +0000 UTC m=+1828.762106833" Feb 02 15:03:51 crc kubenswrapper[4869]: I0202 15:03:51.127120 4869 generic.go:334] "Generic (PLEG): container finished" podID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" 
containerID="96680a39ea5859acbd3d0dd33516c2456928e17934810aa50411921bfa3dafe9" exitCode=0 Feb 02 15:03:51 crc kubenswrapper[4869]: I0202 15:03:51.127240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" event={"ID":"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56","Type":"ContainerDied","Data":"96680a39ea5859acbd3d0dd33516c2456928e17934810aa50411921bfa3dafe9"} Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.627511 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.815175 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") pod \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.815316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") pod \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.815605 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd79q\" (UniqueName: \"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") pod \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\" (UID: \"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56\") " Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.824174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q" (OuterVolumeSpecName: "kube-api-access-gd79q") pod "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" (UID: "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56"). InnerVolumeSpecName "kube-api-access-gd79q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.849983 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" (UID: "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.857088 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory" (OuterVolumeSpecName: "inventory") pod "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" (UID: "7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.918700 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd79q\" (UniqueName: \"kubernetes.io/projected/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-kube-api-access-gd79q\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.918754 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:52 crc kubenswrapper[4869]: I0202 15:03:52.918775 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.150062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" event={"ID":"7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56","Type":"ContainerDied","Data":"a6463f9b07a19640f75c366e973d6b134385522bb069d063749727ab03943faa"} Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.150120 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6463f9b07a19640f75c366e973d6b134385522bb069d063749727ab03943faa" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.150150 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.238815 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:03:53 crc kubenswrapper[4869]: E0202 15:03:53.239601 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.239766 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.240892 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.241881 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.248889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.249274 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.249842 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.250306 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.261675 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.430154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.430686 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.431062 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.465055 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:03:53 crc kubenswrapper[4869]: E0202 15:03:53.465897 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.532724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: 
I0202 15:03:53.533047 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.533090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.538500 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.538672 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.563492 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:53 crc kubenswrapper[4869]: I0202 15:03:53.862239 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" Feb 02 15:03:54 crc kubenswrapper[4869]: I0202 15:03:54.476464 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:03:54 crc kubenswrapper[4869]: W0202 15:03:54.480337 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ff5bea9_e74b_4810_b5b4_cc790c7c4289.slice/crio-8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215 WatchSource:0}: Error finding container 8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215: Status 404 returned error can't find the container with id 8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215 Feb 02 15:03:55 crc kubenswrapper[4869]: I0202 15:03:55.198501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" event={"ID":"5ff5bea9-e74b-4810-b5b4-cc790c7c4289","Type":"ContainerStarted","Data":"8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215"} Feb 02 15:03:56 crc kubenswrapper[4869]: I0202 15:03:56.209142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" event={"ID":"5ff5bea9-e74b-4810-b5b4-cc790c7c4289","Type":"ContainerStarted","Data":"522dc6652d2770764863c6c5c08ccb158c6f223a2af2d2d164167c9020c3eadc"} Feb 02 15:03:56 crc kubenswrapper[4869]: I0202 15:03:56.229167 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" podStartSLOduration=2.695998376 podStartE2EDuration="3.229135366s" podCreationTimestamp="2026-02-02 15:03:53 +0000 UTC" firstStartedPulling="2026-02-02 15:03:54.482974534 +0000 UTC m=+1836.127611304" lastFinishedPulling="2026-02-02 15:03:55.016111514 +0000 UTC m=+1836.660748294" observedRunningTime="2026-02-02 15:03:56.226504631 +0000 UTC m=+1837.871141411" watchObservedRunningTime="2026-02-02 15:03:56.229135366 +0000 UTC m=+1837.873772136" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.057432 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-gssfn"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.072926 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.086127 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-gssfn"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.098932 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.109660 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9kpbk"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.121716 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e113-account-create-update-9fnwx"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.135849 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9kpbk"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.148144 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-z9ktw"] Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.463900 4869 
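[Editor's note: the "Observed pod startup duration" record above reports two figures. A minimal Go sketch of the arithmetic, under the assumption that E2E duration is observedRunningTime minus podCreationTimestamp and SLO duration is the same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling); timestamps are copied from the record. The last digits can differ by ~1e-8 because the tracker appears to subtract on the monotonic clock (the m=+ offsets).]

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// Layout matches the wall-clock part of the log's timestamps.
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-02 15:03:53 +0000 UTC")
	firstPull := parse("2026-02-02 15:03:54.482974534 +0000 UTC")
	lastPull := parse("2026-02-02 15:03:55.016111514 +0000 UTC")
	running := parse("2026-02-02 15:03:56.229135366 +0000 UTC")

	e2e := running.Sub(created)          // ~3.229135366s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~2.695998s, matches podStartSLOduration
	fmt.Println("E2E:", e2e.Seconds(), "SLO:", slo.Seconds())
}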
scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:04:07 crc kubenswrapper[4869]: E0202 15:04:07.464426 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.486668 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1748ab6-c795-414c-a52b-7bf549358524" path="/var/lib/kubelet/pods/b1748ab6-c795-414c-a52b-7bf549358524/volumes" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.487596 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdcf5e33-de9f-408f-8200-6f42fe0d0771" path="/var/lib/kubelet/pods/bdcf5e33-de9f-408f-8200-6f42fe0d0771/volumes" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.488462 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27" path="/var/lib/kubelet/pods/d54d59fb-9c3e-42ea-b21f-56ab4e3a2a27/volumes" Feb 02 15:04:07 crc kubenswrapper[4869]: I0202 15:04:07.489432 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7ca155-a072-4915-b5c5-e0b36a29af9b" path="/var/lib/kubelet/pods/dc7ca155-a072-4915-b5c5-e0b36a29af9b/volumes" Feb 02 15:04:08 crc kubenswrapper[4869]: I0202 15:04:08.030264 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"] Feb 02 15:04:08 crc kubenswrapper[4869]: I0202 15:04:08.041315 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"] Feb 02 15:04:08 crc kubenswrapper[4869]: I0202 15:04:08.055775 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-68d6-account-create-update-6m8ng"] Feb 02 15:04:08 crc kubenswrapper[4869]: I0202 15:04:08.065191 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-74b0-account-create-update-mdkgh"] Feb 02 15:04:09 crc kubenswrapper[4869]: I0202 15:04:09.475858 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff7e998-18b9-4fbe-906a-d756f7cf16c6" path="/var/lib/kubelet/pods/0ff7e998-18b9-4fbe-906a-d756f7cf16c6/volumes" Feb 02 15:04:09 crc kubenswrapper[4869]: I0202 15:04:09.476932 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c50ffbc-cc89-4adc-ae61-9100df4a3ba1" path="/var/lib/kubelet/pods/2c50ffbc-cc89-4adc-ae61-9100df4a3ba1/volumes" Feb 02 15:04:18 crc kubenswrapper[4869]: I0202 15:04:18.462714 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:04:19 crc kubenswrapper[4869]: I0202 15:04:19.499207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b"} Feb 02 15:04:28 crc kubenswrapper[4869]: I0202 15:04:28.840823 4869 scope.go:117] "RemoveContainer" containerID="65c894d6caff283d8e12ca5ca2f52f63ea73a840cf785e78685f2636257f7088" Feb 02 15:04:28 crc kubenswrapper[4869]: 
Feb 02 15:04:28 crc kubenswrapper[4869]: I0202 15:04:28.869020 4869 scope.go:117] "RemoveContainer" containerID="99575408197da6f36edff3800154367961b49a995c8eac1c98ed312b3e5cddeb"
Feb 02 15:04:28 crc kubenswrapper[4869]: I0202 15:04:28.915564 4869 scope.go:117] "RemoveContainer" containerID="d596a1a6b4874f02790897366970dbb255c9422002d2101a6f5f167dd8807bca"
Feb 02 15:04:28 crc kubenswrapper[4869]: I0202 15:04:28.959264 4869 scope.go:117] "RemoveContainer" containerID="48561ec38ba8e1d863e22aea7226f624c163b5e704dc9c40612b25be2fba3af4"
Feb 02 15:04:29 crc kubenswrapper[4869]: I0202 15:04:29.003831 4869 scope.go:117] "RemoveContainer" containerID="7a8d84378031a92f9cb60c774081e0424ba60a9436ccfe3c735c843dfed27fbb"
Feb 02 15:04:29 crc kubenswrapper[4869]: I0202 15:04:29.045934 4869 scope.go:117] "RemoveContainer" containerID="94cbdab87b048c1314f2f73c2a849ceaf199319d9270e621070be8b05d642b46"
Feb 02 15:04:42 crc kubenswrapper[4869]: I0202 15:04:42.715897 4869 generic.go:334] "Generic (PLEG): container finished" podID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" containerID="522dc6652d2770764863c6c5c08ccb158c6f223a2af2d2d164167c9020c3eadc" exitCode=0
Feb 02 15:04:42 crc kubenswrapper[4869]: I0202 15:04:42.716035 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" event={"ID":"5ff5bea9-e74b-4810-b5b4-cc790c7c4289","Type":"ContainerDied","Data":"522dc6652d2770764863c6c5c08ccb158c6f223a2af2d2d164167c9020c3eadc"}
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.248201 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.376456 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") pod \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") "
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.376517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") pod \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") "
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.376567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") pod \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\" (UID: \"5ff5bea9-e74b-4810-b5b4-cc790c7c4289\") "
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.384347 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6" (OuterVolumeSpecName: "kube-api-access-b4fl6") pod "5ff5bea9-e74b-4810-b5b4-cc790c7c4289" (UID: "5ff5bea9-e74b-4810-b5b4-cc790c7c4289"). InnerVolumeSpecName "kube-api-access-b4fl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.412451 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ff5bea9-e74b-4810-b5b4-cc790c7c4289" (UID: "5ff5bea9-e74b-4810-b5b4-cc790c7c4289"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.416566 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory" (OuterVolumeSpecName: "inventory") pod "5ff5bea9-e74b-4810-b5b4-cc790c7c4289" (UID: "5ff5bea9-e74b-4810-b5b4-cc790c7c4289"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.478213 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.478683 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.478698 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4fl6\" (UniqueName: \"kubernetes.io/projected/5ff5bea9-e74b-4810-b5b4-cc790c7c4289-kube-api-access-b4fl6\") on node \"crc\" DevicePath \"\""
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.738405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5" event={"ID":"5ff5bea9-e74b-4810-b5b4-cc790c7c4289","Type":"ContainerDied","Data":"8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215"}
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.738467 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8028967c55096796c322d6eb0a204d642822fca1b16fa2754c48bb43b8d24215"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.738472 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.858947 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"]
Feb 02 15:04:44 crc kubenswrapper[4869]: E0202 15:04:44.859493 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.859520 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.859775 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.860672 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.865861 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.868491 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.868826 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.868921 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.873062 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"]
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.989821 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.990625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:44 crc kubenswrapper[4869]: I0202 15:04:44.990667 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.093206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.093271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.093411 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.100206 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.106585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.113975 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") pod \"ssh-known-hosts-edpm-deployment-cdsl7\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.180578 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.802680 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"]
Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.809754 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 15:04:46 crc kubenswrapper[4869]: I0202 15:04:46.761425 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" event={"ID":"caa3992c-a98c-46cf-a41b-772d9b3c92eb","Type":"ContainerStarted","Data":"64ec45e26a2128c47c0bb7daf081c9f113c4f88a49f073769f3d890df34abd30"}
Feb 02 15:04:46 crc kubenswrapper[4869]: I0202 15:04:46.761499 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" event={"ID":"caa3992c-a98c-46cf-a41b-772d9b3c92eb","Type":"ContainerStarted","Data":"137ad0d914e992def7b05e4f71444f097804e5499b20b256a8d8bf4cc936b429"}
Feb 02 15:04:53 crc kubenswrapper[4869]: I0202 15:04:53.831663 4869 generic.go:334] "Generic (PLEG): container finished" podID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" containerID="64ec45e26a2128c47c0bb7daf081c9f113c4f88a49f073769f3d890df34abd30" exitCode=0
Feb 02 15:04:53 crc kubenswrapper[4869]: I0202 15:04:53.831799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" event={"ID":"caa3992c-a98c-46cf-a41b-772d9b3c92eb","Type":"ContainerDied","Data":"64ec45e26a2128c47c0bb7daf081c9f113c4f88a49f073769f3d890df34abd30"}
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.300834 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
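[Editor's note: every record in this file shares the same shape: a journald-style prefix ("Feb 02 15:04:45 crc kubenswrapper[4869]:"), a klog header (severity letter, MMDD, wall time, PID, source file:line), then the message. A minimal, illustrative Go parser for that layout; the field split is this editor's reading of the format, not an official schema.]

package main

import (
	"fmt"
	"regexp"
)

// One capture group per field: timestamp, host, wrapper PID, severity,
// klog date, klog time, klog PID, source location, message.
var record = regexp.MustCompile(
	`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) (\S+) kubenswrapper\[(\d+)\]: ` +
		`([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

func main() {
	line := `Feb 02 15:04:45 crc kubenswrapper[4869]: I0202 15:04:45.180578 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"`
	m := record.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("severity:", m[4], "source:", m[8])
	fmt.Println("message:", m[9])
}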
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.381041 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") pod \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") "
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.381203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") pod \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") "
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.381242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") pod \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\" (UID: \"caa3992c-a98c-46cf-a41b-772d9b3c92eb\") "
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.390350 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr" (OuterVolumeSpecName: "kube-api-access-gtslr") pod "caa3992c-a98c-46cf-a41b-772d9b3c92eb" (UID: "caa3992c-a98c-46cf-a41b-772d9b3c92eb"). InnerVolumeSpecName "kube-api-access-gtslr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.413807 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "caa3992c-a98c-46cf-a41b-772d9b3c92eb" (UID: "caa3992c-a98c-46cf-a41b-772d9b3c92eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.428877 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "caa3992c-a98c-46cf-a41b-772d9b3c92eb" (UID: "caa3992c-a98c-46cf-a41b-772d9b3c92eb"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.483436 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtslr\" (UniqueName: \"kubernetes.io/projected/caa3992c-a98c-46cf-a41b-772d9b3c92eb-kube-api-access-gtslr\") on node \"crc\" DevicePath \"\""
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.483471 4869 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.483484 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/caa3992c-a98c-46cf-a41b-772d9b3c92eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.876145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7" event={"ID":"caa3992c-a98c-46cf-a41b-772d9b3c92eb","Type":"ContainerDied","Data":"137ad0d914e992def7b05e4f71444f097804e5499b20b256a8d8bf4cc936b429"}
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.876242 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="137ad0d914e992def7b05e4f71444f097804e5499b20b256a8d8bf4cc936b429"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.876253 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdsl7"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.953536 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"]
Feb 02 15:04:55 crc kubenswrapper[4869]: E0202 15:04:55.954030 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" containerName="ssh-known-hosts-edpm-deployment"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.954052 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" containerName="ssh-known-hosts-edpm-deployment"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.954233 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" containerName="ssh-known-hosts-edpm-deployment"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.954933 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.957809 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.960394 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.961415 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.964749 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5"
Feb 02 15:04:55 crc kubenswrapper[4869]: I0202 15:04:55.991368 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"]
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.003546 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.003647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.003768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.106271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.106404 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.106458 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.114423 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.114758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.141930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8lhvg\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.295296 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.869264 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"]
Feb 02 15:04:56 crc kubenswrapper[4869]: I0202 15:04:56.887477 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" event={"ID":"fcac3e6a-7d05-4a46-a045-928dd040027d","Type":"ContainerStarted","Data":"c4d70035f88ebcd6c1428a838c4e4b58e0804e94158de6d2d295a9fdbd95c389"}
Feb 02 15:04:57 crc kubenswrapper[4869]: I0202 15:04:57.903373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" event={"ID":"fcac3e6a-7d05-4a46-a045-928dd040027d","Type":"ContainerStarted","Data":"38d7a89ad8dafd903d91d39613d610dcd9e24c5bf586ce35754a68930252625d"}
Feb 02 15:04:57 crc kubenswrapper[4869]: I0202 15:04:57.938530 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" podStartSLOduration=2.229065605 podStartE2EDuration="2.938510923s" podCreationTimestamp="2026-02-02 15:04:55 +0000 UTC" firstStartedPulling="2026-02-02 15:04:56.875497194 +0000 UTC m=+1898.520133974" lastFinishedPulling="2026-02-02 15:04:57.584942492 +0000 UTC m=+1899.229579292" observedRunningTime="2026-02-02 15:04:57.929894351 +0000 UTC m=+1899.574531121" watchObservedRunningTime="2026-02-02 15:04:57.938510923 +0000 UTC m=+1899.583147693"
Feb 02 15:05:01 crc kubenswrapper[4869]: I0202 15:05:01.058939 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"]
Feb 02 15:05:01 crc kubenswrapper[4869]: I0202 15:05:01.069082 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s5pkh"]
Feb 02 15:05:01 crc kubenswrapper[4869]: I0202 15:05:01.486439 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100a5963-124e-4354-8b5a-fadefef2a0a4" path="/var/lib/kubelet/pods/100a5963-124e-4354-8b5a-fadefef2a0a4/volumes"
Feb 02 15:05:05 crc kubenswrapper[4869]: I0202 15:05:05.994443 4869 generic.go:334] "Generic (PLEG): container finished" podID="fcac3e6a-7d05-4a46-a045-928dd040027d" containerID="38d7a89ad8dafd903d91d39613d610dcd9e24c5bf586ce35754a68930252625d" exitCode=0
Feb 02 15:05:05 crc kubenswrapper[4869]: I0202 15:05:05.994470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" event={"ID":"fcac3e6a-7d05-4a46-a045-928dd040027d","Type":"ContainerDied","Data":"38d7a89ad8dafd903d91d39613d610dcd9e24c5bf586ce35754a68930252625d"}
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.497354 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.593209 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") pod \"fcac3e6a-7d05-4a46-a045-928dd040027d\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") "
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.593563 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") pod \"fcac3e6a-7d05-4a46-a045-928dd040027d\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") "
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.593606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") pod \"fcac3e6a-7d05-4a46-a045-928dd040027d\" (UID: \"fcac3e6a-7d05-4a46-a045-928dd040027d\") "
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.600945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2" (OuterVolumeSpecName: "kube-api-access-npjf2") pod "fcac3e6a-7d05-4a46-a045-928dd040027d" (UID: "fcac3e6a-7d05-4a46-a045-928dd040027d"). InnerVolumeSpecName "kube-api-access-npjf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.621376 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fcac3e6a-7d05-4a46-a045-928dd040027d" (UID: "fcac3e6a-7d05-4a46-a045-928dd040027d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.635086 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory" (OuterVolumeSpecName: "inventory") pod "fcac3e6a-7d05-4a46-a045-928dd040027d" (UID: "fcac3e6a-7d05-4a46-a045-928dd040027d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.699831 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.699887 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npjf2\" (UniqueName: \"kubernetes.io/projected/fcac3e6a-7d05-4a46-a045-928dd040027d-kube-api-access-npjf2\") on node \"crc\" DevicePath \"\""
Feb 02 15:05:07 crc kubenswrapper[4869]: I0202 15:05:07.699940 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcac3e6a-7d05-4a46-a045-928dd040027d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.022154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg" event={"ID":"fcac3e6a-7d05-4a46-a045-928dd040027d","Type":"ContainerDied","Data":"c4d70035f88ebcd6c1428a838c4e4b58e0804e94158de6d2d295a9fdbd95c389"}
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.022204 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4d70035f88ebcd6c1428a838c4e4b58e0804e94158de6d2d295a9fdbd95c389"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.022624 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.107758 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"]
Feb 02 15:05:08 crc kubenswrapper[4869]: E0202 15:05:08.108509 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcac3e6a-7d05-4a46-a045-928dd040027d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.108533 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcac3e6a-7d05-4a46-a045-928dd040027d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.108764 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcac3e6a-7d05-4a46-a045-928dd040027d" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.109580 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.111747 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.112591 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.112832 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.112869 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.124192 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"]
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.211176 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.211315 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.211388 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.314648 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.314880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.315055 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.320603 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.322708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.339826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:08 crc kubenswrapper[4869]: I0202 15:05:08.430284 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:09 crc kubenswrapper[4869]: I0202 15:05:09.059710 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"]
Feb 02 15:05:10 crc kubenswrapper[4869]: I0202 15:05:10.043514 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" event={"ID":"a76d27b0-6cf8-4338-9022-1790d9544232","Type":"ContainerStarted","Data":"f55a47c4ff2286da3a6e2327eb568bde4d649c547bbd0bd0f76ad0552dc9b592"}
Feb 02 15:05:10 crc kubenswrapper[4869]: I0202 15:05:10.043985 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" event={"ID":"a76d27b0-6cf8-4338-9022-1790d9544232","Type":"ContainerStarted","Data":"6edcd83683966681890eb9a0b53a8877255f0641e4e312e6e45a47caa7c492a2"}
Feb 02 15:05:10 crc kubenswrapper[4869]: I0202 15:05:10.067891 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" podStartSLOduration=1.639370377 podStartE2EDuration="2.067863587s" podCreationTimestamp="2026-02-02 15:05:08 +0000 UTC" firstStartedPulling="2026-02-02 15:05:09.056294763 +0000 UTC m=+1910.700931533" lastFinishedPulling="2026-02-02 15:05:09.484787953 +0000 UTC m=+1911.129424743" observedRunningTime="2026-02-02 15:05:10.064193848 +0000 UTC m=+1911.708830628" watchObservedRunningTime="2026-02-02 15:05:10.067863587 +0000 UTC m=+1911.712500357"
Feb 02 15:05:19 crc kubenswrapper[4869]: I0202 15:05:19.135687 4869 generic.go:334] "Generic (PLEG): container finished" podID="a76d27b0-6cf8-4338-9022-1790d9544232" containerID="f55a47c4ff2286da3a6e2327eb568bde4d649c547bbd0bd0f76ad0552dc9b592" exitCode=0
Feb 02 15:05:19 crc kubenswrapper[4869]: I0202 15:05:19.135818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" event={"ID":"a76d27b0-6cf8-4338-9022-1790d9544232","Type":"ContainerDied","Data":"f55a47c4ff2286da3a6e2327eb568bde4d649c547bbd0bd0f76ad0552dc9b592"}
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.567230 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.712957 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") pod \"a76d27b0-6cf8-4338-9022-1790d9544232\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") "
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.713089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") pod \"a76d27b0-6cf8-4338-9022-1790d9544232\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") "
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.713234 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") pod \"a76d27b0-6cf8-4338-9022-1790d9544232\" (UID: \"a76d27b0-6cf8-4338-9022-1790d9544232\") "
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.725703 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs" (OuterVolumeSpecName: "kube-api-access-578gs") pod "a76d27b0-6cf8-4338-9022-1790d9544232" (UID: "a76d27b0-6cf8-4338-9022-1790d9544232"). InnerVolumeSpecName "kube-api-access-578gs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.747738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory" (OuterVolumeSpecName: "inventory") pod "a76d27b0-6cf8-4338-9022-1790d9544232" (UID: "a76d27b0-6cf8-4338-9022-1790d9544232"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.747943 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a76d27b0-6cf8-4338-9022-1790d9544232" (UID: "a76d27b0-6cf8-4338-9022-1790d9544232"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.815741 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578gs\" (UniqueName: \"kubernetes.io/projected/a76d27b0-6cf8-4338-9022-1790d9544232-kube-api-access-578gs\") on node \"crc\" DevicePath \"\""
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.815780 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 15:05:20 crc kubenswrapper[4869]: I0202 15:05:20.815793 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a76d27b0-6cf8-4338-9022-1790d9544232-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 15:05:21 crc kubenswrapper[4869]: I0202 15:05:21.173815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6" event={"ID":"a76d27b0-6cf8-4338-9022-1790d9544232","Type":"ContainerDied","Data":"6edcd83683966681890eb9a0b53a8877255f0641e4e312e6e45a47caa7c492a2"}
Feb 02 15:05:21 crc kubenswrapper[4869]: I0202 15:05:21.173873 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"
Feb 02 15:05:21 crc kubenswrapper[4869]: I0202 15:05:21.173876 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6edcd83683966681890eb9a0b53a8877255f0641e4e312e6e45a47caa7c492a2"
Feb 02 15:05:28 crc kubenswrapper[4869]: I0202 15:05:28.062759 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"]
Feb 02 15:05:28 crc kubenswrapper[4869]: I0202 15:05:28.076307 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2bx2t"]
Feb 02 15:05:29 crc kubenswrapper[4869]: I0202 15:05:29.183747 4869 scope.go:117] "RemoveContainer" containerID="ebe1f428461f9ca88e79225425980e308f9e983a005ecc404634b54d8fbf41b8"
Feb 02 15:05:29 crc kubenswrapper[4869]: I0202 15:05:29.484161 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0" path="/var/lib/kubelet/pods/3b1a8ed8-1aa8-41b4-8409-f1ae9251a2e0/volumes"
Feb 02 15:05:31 crc kubenswrapper[4869]: I0202 15:05:31.046507 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"]
Feb 02 15:05:31 crc kubenswrapper[4869]: I0202 15:05:31.061835 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfr68"]
Feb 02 15:05:31 crc kubenswrapper[4869]: I0202 15:05:31.482686 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4bee65-28e6-4f62-a2b5-b4d9227c5624" path="/var/lib/kubelet/pods/6c4bee65-28e6-4f62-a2b5-b4d9227c5624/volumes"
Feb 02 15:06:10 crc kubenswrapper[4869]: I0202 15:06:10.058050 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"]
Feb 02 15:06:10 crc kubenswrapper[4869]: I0202 15:06:10.066894 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4296x"]
Feb 02 15:06:11 crc kubenswrapper[4869]: I0202 15:06:11.474856 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3908c6-0f4b-4b27-8f07-9851e54d845b" path="/var/lib/kubelet/pods/3e3908c6-0f4b-4b27-8f07-9851e54d845b/volumes"
Feb 02 15:06:29 crc kubenswrapper[4869]: I0202 15:06:29.273495 4869 scope.go:117] "RemoveContainer" containerID="b0971dd6da0e21634706adc3fb0385fe86a85a8749020d44d9b581485a18729f"
Feb 02 15:06:29 crc kubenswrapper[4869]: I0202 15:06:29.328741 4869 scope.go:117] "RemoveContainer" containerID="b53f792df7cff8163ee8a7592ca68143879b985452df8ad4b61543811725bc69"
Feb 02 15:06:29 crc kubenswrapper[4869]: I0202 15:06:29.400290 4869 scope.go:117] "RemoveContainer" containerID="38dd79ef05a995974ad73195962d823416fb4b0c857e118492f50f15f1f25c17"
Feb 02 15:06:45 crc kubenswrapper[4869]: I0202 15:06:45.304935 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:06:45 crc kubenswrapper[4869]: I0202 15:06:45.305774 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.245787 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"]
Feb 02 15:07:05 crc kubenswrapper[4869]: E0202 15:07:05.246873 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76d27b0-6cf8-4338-9022-1790d9544232" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.246893 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76d27b0-6cf8-4338-9022-1790d9544232" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.247173 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76d27b0-6cf8-4338-9022-1790d9544232" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.248842 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.269835 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"]
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.370829 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.371232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.371397 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.480789 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.481078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.481336 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.482784 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.487814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.518348 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") pod \"redhat-operators-25ggf\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") " pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:05 crc kubenswrapper[4869]: I0202 15:07:05.570571 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:06 crc kubenswrapper[4869]: I0202 15:07:06.070661 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"]
Feb 02 15:07:06 crc kubenswrapper[4869]: I0202 15:07:06.727120 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerID="b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08" exitCode=0
Feb 02 15:07:06 crc kubenswrapper[4869]: I0202 15:07:06.727171 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerDied","Data":"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08"}
Feb 02 15:07:06 crc kubenswrapper[4869]: I0202 15:07:06.727204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerStarted","Data":"74bd486d0462a49148f30c443349f935cd80e03bad245301c3d04dff5daeb9fe"}
Feb 02 15:07:07 crc kubenswrapper[4869]: I0202 15:07:07.761253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerStarted","Data":"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb"}
Feb 02 15:07:08 crc kubenswrapper[4869]: I0202 15:07:08.773547 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerID="d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb" exitCode=0
Feb 02 15:07:08 crc kubenswrapper[4869]: I0202 15:07:08.773764 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerDied","Data":"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb"}
Feb 02 15:07:09 crc kubenswrapper[4869]: I0202 15:07:09.797211 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerStarted","Data":"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528"}
Feb 02 15:07:09 crc kubenswrapper[4869]: I0202 15:07:09.829872 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-25ggf" podStartSLOduration=2.185128238 podStartE2EDuration="4.829851338s" podCreationTimestamp="2026-02-02 15:07:05 +0000 UTC" firstStartedPulling="2026-02-02 15:07:06.729562754 +0000 UTC m=+2028.374199524" lastFinishedPulling="2026-02-02 15:07:09.374285844 +0000 UTC m=+2031.018922624" observedRunningTime="2026-02-02 15:07:09.825786328 +0000 UTC m=+2031.470423098" watchObservedRunningTime="2026-02-02 15:07:09.829851338 +0000 UTC m=+2031.474488108"
Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.304742 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.305295 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.571357 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.571438 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.626236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.921177 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:15 crc kubenswrapper[4869]: I0202 15:07:15.975674 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"]
Feb 02 15:07:17 crc kubenswrapper[4869]: I0202 15:07:17.874870 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-25ggf" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="registry-server" containerID="cri-o://292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" gracePeriod=2
Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.399091 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-25ggf"
Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.532724 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") pod \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") "
Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.533069 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") pod \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") "
Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.533179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") pod \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\" (UID: \"cc4fe44e-d1b4-4a2a-91ae-37134223e21e\") "
Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.534438 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities" (OuterVolumeSpecName: "utilities") pod "cc4fe44e-d1b4-4a2a-91ae-37134223e21e" (UID: "cc4fe44e-d1b4-4a2a-91ae-37134223e21e"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.543167 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2" (OuterVolumeSpecName: "kube-api-access-wkzz2") pod "cc4fe44e-d1b4-4a2a-91ae-37134223e21e" (UID: "cc4fe44e-d1b4-4a2a-91ae-37134223e21e"). InnerVolumeSpecName "kube-api-access-wkzz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.637736 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkzz2\" (UniqueName: \"kubernetes.io/projected/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-kube-api-access-wkzz2\") on node \"crc\" DevicePath \"\"" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.637792 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.719974 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc4fe44e-d1b4-4a2a-91ae-37134223e21e" (UID: "cc4fe44e-d1b4-4a2a-91ae-37134223e21e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.739947 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4fe44e-d1b4-4a2a-91ae-37134223e21e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885117 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerID="292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" exitCode=0 Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885169 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerDied","Data":"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528"} Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885217 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-25ggf" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885489 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-25ggf" event={"ID":"cc4fe44e-d1b4-4a2a-91ae-37134223e21e","Type":"ContainerDied","Data":"74bd486d0462a49148f30c443349f935cd80e03bad245301c3d04dff5daeb9fe"} Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.885522 4869 scope.go:117] "RemoveContainer" containerID="292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.921149 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"] Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.927788 4869 scope.go:117] "RemoveContainer" containerID="d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb" Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.930396 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-25ggf"] Feb 02 15:07:18 crc kubenswrapper[4869]: I0202 15:07:18.972046 4869 scope.go:117] "RemoveContainer" containerID="b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.005370 4869 scope.go:117] "RemoveContainer" containerID="292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" Feb 02 15:07:19 crc kubenswrapper[4869]: E0202 15:07:19.006090 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528\": container with ID starting with 292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528 not found: ID does not exist" containerID="292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.006163 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528"} err="failed to get container status \"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528\": rpc error: code = NotFound desc = could not find container \"292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528\": container with ID starting with 292a2b4fb777a6b2061ab87e21764976a37aadd9f452e33b1d983e11d8809528 not found: ID does not exist" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.006206 4869 scope.go:117] "RemoveContainer" containerID="d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb" Feb 02 15:07:19 crc kubenswrapper[4869]: E0202 15:07:19.006976 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb\": container with ID starting with d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb not found: ID does not exist" containerID="d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.007016 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb"} err="failed to get container status \"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb\": rpc error: code = NotFound desc = could not find container 
\"d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb\": container with ID starting with d54d15e2be29de2d62167eca3e40138cbdf67085900a2c10e333fe7cc0affaeb not found: ID does not exist" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.007046 4869 scope.go:117] "RemoveContainer" containerID="b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08" Feb 02 15:07:19 crc kubenswrapper[4869]: E0202 15:07:19.007388 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08\": container with ID starting with b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08 not found: ID does not exist" containerID="b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.007485 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08"} err="failed to get container status \"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08\": rpc error: code = NotFound desc = could not find container \"b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08\": container with ID starting with b0ab511c108bf3cd75630189e6fb7526551ac4e0e6addebbce2bef80e7b31a08 not found: ID does not exist" Feb 02 15:07:19 crc kubenswrapper[4869]: I0202 15:07:19.484255 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" path="/var/lib/kubelet/pods/cc4fe44e-d1b4-4a2a-91ae-37134223e21e/volumes" Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.303991 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.304760 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.304824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.305458 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:07:45 crc kubenswrapper[4869]: I0202 15:07:45.305524 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b" gracePeriod=600 Feb 02 15:07:46 crc kubenswrapper[4869]: I0202 15:07:46.177975 4869 generic.go:334] 
"Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b" exitCode=0 Feb 02 15:07:46 crc kubenswrapper[4869]: I0202 15:07:46.178135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b"} Feb 02 15:07:46 crc kubenswrapper[4869]: I0202 15:07:46.178626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"} Feb 02 15:07:46 crc kubenswrapper[4869]: I0202 15:07:46.178658 4869 scope.go:117] "RemoveContainer" containerID="bb568e91b917925906d4cd15a98b47052c2c84da815fa877a8c27a8ee02730e9" Feb 02 15:09:08 crc kubenswrapper[4869]: E0202 15:09:08.048249 4869 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.82:55034->38.129.56.82:44151: write tcp 38.129.56.82:55034->38.129.56.82:44151: write: broken pipe Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.433760 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.443712 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.453670 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.460662 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.477203 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8lhvg"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.477259 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.480544 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-b8vlj"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.490808 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.500382 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.511586 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mfbv6"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.520817 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b6wlg"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.532963 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdsl7"] Feb 02 15:09:15 crc kubenswrapper[4869]: 
I0202 15:09:15.547045 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d5gf6"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.554793 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qjxvt"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.565369 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.573808 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.580243 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.595063 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-pxp6h"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.608874 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rxgww"] Feb 02 15:09:15 crc kubenswrapper[4869]: I0202 15:09:15.615578 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hxtn5"] Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.481229 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3767bf04-261f-4a7b-9639-ae8002718621" path="/var/lib/kubelet/pods/3767bf04-261f-4a7b-9639-ae8002718621/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.482949 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff5bea9-e74b-4810-b5b4-cc790c7c4289" path="/var/lib/kubelet/pods/5ff5bea9-e74b-4810-b5b4-cc790c7c4289/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.484102 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56" path="/var/lib/kubelet/pods/7e30d0ae-e1de-45c4-83ba-0d7f1b7d8d56/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.485218 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a111a064-b5cf-4489-8262-1aef88170e07" path="/var/lib/kubelet/pods/a111a064-b5cf-4489-8262-1aef88170e07/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.487131 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76d27b0-6cf8-4338-9022-1790d9544232" path="/var/lib/kubelet/pods/a76d27b0-6cf8-4338-9022-1790d9544232/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.487788 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82a77f6-7b23-4723-8ba7-a8754d3cc15f" path="/var/lib/kubelet/pods/a82a77f6-7b23-4723-8ba7-a8754d3cc15f/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.488512 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083" path="/var/lib/kubelet/pods/ae817bfe-d5e6-4e69-8b6b-67d4d1d4e083/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.489965 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13d039a-826a-4431-a147-9550c40460d2" path="/var/lib/kubelet/pods/b13d039a-826a-4431-a147-9550c40460d2/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.490678 4869 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa3992c-a98c-46cf-a41b-772d9b3c92eb" path="/var/lib/kubelet/pods/caa3992c-a98c-46cf-a41b-772d9b3c92eb/volumes" Feb 02 15:09:17 crc kubenswrapper[4869]: I0202 15:09:17.491393 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcac3e6a-7d05-4a46-a045-928dd040027d" path="/var/lib/kubelet/pods/fcac3e6a-7d05-4a46-a045-928dd040027d/volumes" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.123861 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d"] Feb 02 15:09:21 crc kubenswrapper[4869]: E0202 15:09:21.125001 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="registry-server" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.125017 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="registry-server" Feb 02 15:09:21 crc kubenswrapper[4869]: E0202 15:09:21.125048 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="extract-utilities" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.125056 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="extract-utilities" Feb 02 15:09:21 crc kubenswrapper[4869]: E0202 15:09:21.125095 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="extract-content" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.125102 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="extract-content" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.125295 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4fe44e-d1b4-4a2a-91ae-37134223e21e" containerName="registry-server" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.126117 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.129733 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.129980 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.130158 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.130287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.130392 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.148050 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d"] Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270101 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270157 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270699 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.270883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.372963 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.373430 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.373484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.373554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.373576 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.380951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.381129 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.382373 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.383992 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.395554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-d946d\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:21 crc kubenswrapper[4869]: I0202 15:09:21.476662 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:22 crc kubenswrapper[4869]: I0202 15:09:22.047300 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d"] Feb 02 15:09:22 crc kubenswrapper[4869]: W0202 15:09:22.051272 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09ba8528_6790_4df1_92c8_828f0ccd858e.slice/crio-a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab WatchSource:0}: Error finding container a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab: Status 404 returned error can't find the container with id a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab Feb 02 15:09:22 crc kubenswrapper[4869]: I0202 15:09:22.208122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" event={"ID":"09ba8528-6790-4df1-92c8-828f0ccd858e","Type":"ContainerStarted","Data":"a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab"} Feb 02 15:09:23 crc kubenswrapper[4869]: I0202 15:09:23.220413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" event={"ID":"09ba8528-6790-4df1-92c8-828f0ccd858e","Type":"ContainerStarted","Data":"34097c075399f58cc0213991bed63c10db09ada52f0b5c23038e8fb7bcde2a18"} Feb 02 15:09:23 crc kubenswrapper[4869]: I0202 15:09:23.244191 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" podStartSLOduration=1.727247861 podStartE2EDuration="2.244168428s" podCreationTimestamp="2026-02-02 15:09:21 +0000 UTC" firstStartedPulling="2026-02-02 15:09:22.054052653 +0000 UTC m=+2163.698689453" lastFinishedPulling="2026-02-02 15:09:22.57097323 +0000 UTC m=+2164.215610020" observedRunningTime="2026-02-02 15:09:23.24177597 +0000 UTC m=+2164.886412750" watchObservedRunningTime="2026-02-02 15:09:23.244168428 +0000 UTC m=+2164.888805218" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.608952 4869 scope.go:117] "RemoveContainer" containerID="1780e4b116d1f7c5ebd11904a615204e47379474971f83c266f93d8577ef7a03" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.679828 4869 scope.go:117] "RemoveContainer" containerID="6541835580f7732c564fce1cfc6a7a903f9541014fbd453cd8d73ffdda64ec00" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.760115 4869 scope.go:117] "RemoveContainer" 
containerID="490db36993a771e14aff3fe8fc3bd15e52a119fe4a3a15db988f24da87af2b2a" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.798841 4869 scope.go:117] "RemoveContainer" containerID="7d5e25ac19c483d6558c58fba2ace1e684808d4e3b1a821e0d5e58c6d0be0112" Feb 02 15:09:29 crc kubenswrapper[4869]: I0202 15:09:29.878871 4869 scope.go:117] "RemoveContainer" containerID="e77dd6e80ad1057a4bcf30f60becbca014a57b0ad1a2095aca5495f54d7091d0" Feb 02 15:09:34 crc kubenswrapper[4869]: I0202 15:09:34.329715 4869 generic.go:334] "Generic (PLEG): container finished" podID="09ba8528-6790-4df1-92c8-828f0ccd858e" containerID="34097c075399f58cc0213991bed63c10db09ada52f0b5c23038e8fb7bcde2a18" exitCode=0 Feb 02 15:09:34 crc kubenswrapper[4869]: I0202 15:09:34.329840 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" event={"ID":"09ba8528-6790-4df1-92c8-828f0ccd858e","Type":"ContainerDied","Data":"34097c075399f58cc0213991bed63c10db09ada52f0b5c23038e8fb7bcde2a18"} Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.375273 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.383656 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.396707 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.514087 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.514456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.514543 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.616560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.616640 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " 
pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.616671 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.617250 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.617834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.644026 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") pod \"certified-operators-w66l5\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.714163 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:35 crc kubenswrapper[4869]: I0202 15:09:35.880542 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025429 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025538 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.025577 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") pod \"09ba8528-6790-4df1-92c8-828f0ccd858e\" (UID: \"09ba8528-6790-4df1-92c8-828f0ccd858e\") " Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.032003 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.042448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm" (OuterVolumeSpecName: "kube-api-access-p8whm") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "kube-api-access-p8whm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.044668 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph" (OuterVolumeSpecName: "ceph") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.060156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory" (OuterVolumeSpecName: "inventory") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.061663 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "09ba8528-6790-4df1-92c8-828f0ccd858e" (UID: "09ba8528-6790-4df1-92c8-828f0ccd858e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128594 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128627 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8whm\" (UniqueName: \"kubernetes.io/projected/09ba8528-6790-4df1-92c8-828f0ccd858e-kube-api-access-p8whm\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128654 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128664 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.128675 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09ba8528-6790-4df1-92c8-828f0ccd858e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.234267 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.351730 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.351727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-d946d" event={"ID":"09ba8528-6790-4df1-92c8-828f0ccd858e","Type":"ContainerDied","Data":"a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab"} Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.352051 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a877d1d3b630d5c78ad7f4baf674801ca14c3c49bddb4e7bc2396c2b97ef40ab" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.354678 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerStarted","Data":"63ed0aa4ae4d75f86ca5c11797083a1158d148802874c80387bd8d541d90c5d0"} Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.446069 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2"] Feb 02 15:09:36 crc kubenswrapper[4869]: E0202 15:09:36.446807 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ba8528-6790-4df1-92c8-828f0ccd858e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.446827 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ba8528-6790-4df1-92c8-828f0ccd858e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.447072 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="09ba8528-6790-4df1-92c8-828f0ccd858e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.447948 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.461611 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2"] Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499104 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499412 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499537 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.499432 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.541505 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.541547 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.542493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.542615 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.542679 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: E0202 15:09:36.596583 4869 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca940380_14c0_4d24_950b_7aa523735f62.slice/crio-bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09ba8528_6790_4df1_92c8_828f0ccd858e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca940380_14c0_4d24_950b_7aa523735f62.slice/crio-conmon-bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336.scope\": RecentStats: unable to find data in memory cache]" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.645352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.645997 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.646067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.646109 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.646148 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.653864 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.653942 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.654244 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.654454 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.669173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:36 crc kubenswrapper[4869]: I0202 15:09:36.839097 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:09:37 crc kubenswrapper[4869]: I0202 15:09:37.376054 4869 generic.go:334] "Generic (PLEG): container finished" podID="ca940380-14c0-4d24-950b-7aa523735f62" containerID="bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336" exitCode=0 Feb 02 15:09:37 crc kubenswrapper[4869]: I0202 15:09:37.376593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerDied","Data":"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336"} Feb 02 15:09:37 crc kubenswrapper[4869]: I0202 15:09:37.437409 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2"] Feb 02 15:09:38 crc kubenswrapper[4869]: I0202 15:09:38.394856 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" event={"ID":"5ca847f3-12e0-43a7-af47-6739dc10627d","Type":"ContainerStarted","Data":"f0f59f64f18cd831b0ccbcfaeef9e58c704291972b6c59a787453f7131843bee"} Feb 02 15:09:38 crc kubenswrapper[4869]: I0202 15:09:38.395553 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" event={"ID":"5ca847f3-12e0-43a7-af47-6739dc10627d","Type":"ContainerStarted","Data":"af2ea32d786cda13426e5b56227ed5b1f4953e3931b299286158fd837d86464e"} Feb 02 15:09:38 crc kubenswrapper[4869]: I0202 15:09:38.423267 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" podStartSLOduration=2.004016996 podStartE2EDuration="2.423162924s" podCreationTimestamp="2026-02-02 15:09:36 +0000 UTC" firstStartedPulling="2026-02-02 
15:09:37.450685117 +0000 UTC m=+2179.095321927" lastFinishedPulling="2026-02-02 15:09:37.869831075 +0000 UTC m=+2179.514467855" observedRunningTime="2026-02-02 15:09:38.41764557 +0000 UTC m=+2180.062282350" watchObservedRunningTime="2026-02-02 15:09:38.423162924 +0000 UTC m=+2180.067799714" Feb 02 15:09:39 crc kubenswrapper[4869]: I0202 15:09:39.407994 4869 generic.go:334] "Generic (PLEG): container finished" podID="ca940380-14c0-4d24-950b-7aa523735f62" containerID="f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270" exitCode=0 Feb 02 15:09:39 crc kubenswrapper[4869]: I0202 15:09:39.408086 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerDied","Data":"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270"} Feb 02 15:09:40 crc kubenswrapper[4869]: I0202 15:09:40.419184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerStarted","Data":"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866"} Feb 02 15:09:40 crc kubenswrapper[4869]: I0202 15:09:40.451819 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w66l5" podStartSLOduration=2.728468569 podStartE2EDuration="5.451801102s" podCreationTimestamp="2026-02-02 15:09:35 +0000 UTC" firstStartedPulling="2026-02-02 15:09:37.383286144 +0000 UTC m=+2179.027922954" lastFinishedPulling="2026-02-02 15:09:40.106618727 +0000 UTC m=+2181.751255487" observedRunningTime="2026-02-02 15:09:40.446590974 +0000 UTC m=+2182.091227734" watchObservedRunningTime="2026-02-02 15:09:40.451801102 +0000 UTC m=+2182.096437872" Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.307018 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.307964 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.716253 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.716343 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:45 crc kubenswrapper[4869]: I0202 15:09:45.767873 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:46 crc kubenswrapper[4869]: I0202 15:09:46.544290 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:46 crc kubenswrapper[4869]: I0202 15:09:46.641045 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:48 crc kubenswrapper[4869]: 
I0202 15:09:48.500769 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w66l5" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="registry-server" containerID="cri-o://d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" gracePeriod=2 Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.041584 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.148358 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") pod \"ca940380-14c0-4d24-950b-7aa523735f62\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.148455 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") pod \"ca940380-14c0-4d24-950b-7aa523735f62\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.148723 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") pod \"ca940380-14c0-4d24-950b-7aa523735f62\" (UID: \"ca940380-14c0-4d24-950b-7aa523735f62\") " Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.149486 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities" (OuterVolumeSpecName: "utilities") pod "ca940380-14c0-4d24-950b-7aa523735f62" (UID: "ca940380-14c0-4d24-950b-7aa523735f62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.157380 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d" (OuterVolumeSpecName: "kube-api-access-8w66d") pod "ca940380-14c0-4d24-950b-7aa523735f62" (UID: "ca940380-14c0-4d24-950b-7aa523735f62"). InnerVolumeSpecName "kube-api-access-8w66d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.211416 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca940380-14c0-4d24-950b-7aa523735f62" (UID: "ca940380-14c0-4d24-950b-7aa523735f62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.251726 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.251762 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w66d\" (UniqueName: \"kubernetes.io/projected/ca940380-14c0-4d24-950b-7aa523735f62-kube-api-access-8w66d\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.251773 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca940380-14c0-4d24-950b-7aa523735f62-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518311 4869 generic.go:334] "Generic (PLEG): container finished" podID="ca940380-14c0-4d24-950b-7aa523735f62" containerID="d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" exitCode=0 Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518393 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerDied","Data":"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866"} Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w66l5" event={"ID":"ca940380-14c0-4d24-950b-7aa523735f62","Type":"ContainerDied","Data":"63ed0aa4ae4d75f86ca5c11797083a1158d148802874c80387bd8d541d90c5d0"} Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518486 4869 scope.go:117] "RemoveContainer" containerID="d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.518783 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w66l5" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.558749 4869 scope.go:117] "RemoveContainer" containerID="f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.567733 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.579491 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w66l5"] Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.591327 4869 scope.go:117] "RemoveContainer" containerID="bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.637177 4869 scope.go:117] "RemoveContainer" containerID="d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" Feb 02 15:09:49 crc kubenswrapper[4869]: E0202 15:09:49.637840 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866\": container with ID starting with d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866 not found: ID does not exist" containerID="d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.637895 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866"} err="failed to get container status \"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866\": rpc error: code = NotFound desc = could not find container \"d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866\": container with ID starting with d3d634accf25b0eeb27a7f08bf840654bfd7f7aacc7e903e04e28504ff369866 not found: ID does not exist" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.637937 4869 scope.go:117] "RemoveContainer" containerID="f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270" Feb 02 15:09:49 crc kubenswrapper[4869]: E0202 15:09:49.638192 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270\": container with ID starting with f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270 not found: ID does not exist" containerID="f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.638229 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270"} err="failed to get container status \"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270\": rpc error: code = NotFound desc = could not find container \"f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270\": container with ID starting with f0f8f53ed974a5a9ed2b913d0732feb41c19e0b06a3936da1cab56e1c9228270 not found: ID does not exist" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.638250 4869 scope.go:117] "RemoveContainer" containerID="bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336" Feb 02 15:09:49 crc kubenswrapper[4869]: E0202 15:09:49.638627 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336\": container with ID starting with bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336 not found: ID does not exist" containerID="bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336" Feb 02 15:09:49 crc kubenswrapper[4869]: I0202 15:09:49.638643 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336"} err="failed to get container status \"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336\": rpc error: code = NotFound desc = could not find container \"bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336\": container with ID starting with bdf3b014342457e8bb00567ee0d73718800164350e0cbab4b5020d569ab6f336 not found: ID does not exist" Feb 02 15:09:51 crc kubenswrapper[4869]: I0202 15:09:51.478700 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca940380-14c0-4d24-950b-7aa523735f62" path="/var/lib/kubelet/pods/ca940380-14c0-4d24-950b-7aa523735f62/volumes" Feb 02 15:10:15 crc kubenswrapper[4869]: I0202 15:10:15.304708 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:10:15 crc kubenswrapper[4869]: I0202 15:10:15.305527 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:10:30 crc kubenswrapper[4869]: I0202 15:10:30.054788 4869 scope.go:117] "RemoveContainer" containerID="522dc6652d2770764863c6c5c08ccb158c6f223a2af2d2d164167c9020c3eadc" Feb 02 15:10:30 crc kubenswrapper[4869]: I0202 15:10:30.112375 4869 scope.go:117] "RemoveContainer" containerID="96680a39ea5859acbd3d0dd33516c2456928e17934810aa50411921bfa3dafe9" Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.304958 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.305695 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.305766 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.306872 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"} 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:10:45 crc kubenswrapper[4869]: I0202 15:10:45.307061 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" gracePeriod=600 Feb 02 15:10:45 crc kubenswrapper[4869]: E0202 15:10:45.438515 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:10:46 crc kubenswrapper[4869]: I0202 15:10:46.116443 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" exitCode=0 Feb 02 15:10:46 crc kubenswrapper[4869]: I0202 15:10:46.116707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"} Feb 02 15:10:46 crc kubenswrapper[4869]: I0202 15:10:46.116803 4869 scope.go:117] "RemoveContainer" containerID="e5aab5a7e46c199e806a7282ef101de94b7514934575e3f06631d7f5db57da1b" Feb 02 15:10:46 crc kubenswrapper[4869]: I0202 15:10:46.118268 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:10:46 crc kubenswrapper[4869]: E0202 15:10:46.119098 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.755665 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:10:52 crc kubenswrapper[4869]: E0202 15:10:52.757055 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="extract-utilities" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.757080 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="extract-utilities" Feb 02 15:10:52 crc kubenswrapper[4869]: E0202 15:10:52.757106 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="registry-server" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.757121 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="registry-server" Feb 02 15:10:52 crc kubenswrapper[4869]: E0202 15:10:52.757154 4869 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="extract-content" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.757165 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="extract-content" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.757407 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca940380-14c0-4d24-950b-7aa523735f62" containerName="registry-server" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.759359 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.785019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.862718 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.862783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.862853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.965800 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.965886 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.966029 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.966834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.971372 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:52 crc kubenswrapper[4869]: I0202 15:10:52.988190 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") pod \"community-operators-rfjq8\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:53 crc kubenswrapper[4869]: I0202 15:10:53.094337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:10:53 crc kubenswrapper[4869]: I0202 15:10:53.666787 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:10:54 crc kubenswrapper[4869]: I0202 15:10:54.209996 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerID="7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4" exitCode=0 Feb 02 15:10:54 crc kubenswrapper[4869]: I0202 15:10:54.210064 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerDied","Data":"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4"} Feb 02 15:10:54 crc kubenswrapper[4869]: I0202 15:10:54.210101 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerStarted","Data":"ef25471803dfe9339a9d1b0293283644c98a8f02010d70dbd37f66e7576d60e8"} Feb 02 15:10:54 crc kubenswrapper[4869]: I0202 15:10:54.214561 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:10:56 crc kubenswrapper[4869]: I0202 15:10:56.233623 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerID="e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f" exitCode=0 Feb 02 15:10:56 crc kubenswrapper[4869]: I0202 15:10:56.233755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerDied","Data":"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f"} Feb 02 15:10:57 crc kubenswrapper[4869]: I0202 15:10:57.251728 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerStarted","Data":"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd"} Feb 02 15:10:57 crc kubenswrapper[4869]: I0202 15:10:57.283660 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rfjq8" 
podStartSLOduration=2.850367072 podStartE2EDuration="5.283635002s" podCreationTimestamp="2026-02-02 15:10:52 +0000 UTC" firstStartedPulling="2026-02-02 15:10:54.21416428 +0000 UTC m=+2255.858801060" lastFinishedPulling="2026-02-02 15:10:56.6474322 +0000 UTC m=+2258.292068990" observedRunningTime="2026-02-02 15:10:57.27661397 +0000 UTC m=+2258.921250750" watchObservedRunningTime="2026-02-02 15:10:57.283635002 +0000 UTC m=+2258.928271802" Feb 02 15:10:57 crc kubenswrapper[4869]: I0202 15:10:57.463577 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:10:57 crc kubenswrapper[4869]: E0202 15:10:57.463953 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.094797 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.095575 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.145615 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.364189 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:03 crc kubenswrapper[4869]: I0202 15:11:03.420689 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.327099 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rfjq8" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="registry-server" containerID="cri-o://f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" gracePeriod=2 Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.857368 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.880546 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") pod \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.880696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") pod \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.880730 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") pod \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\" (UID: \"1ddeefe1-3e9c-4576-b226-e8c3b6462947\") " Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.881436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities" (OuterVolumeSpecName: "utilities") pod "1ddeefe1-3e9c-4576-b226-e8c3b6462947" (UID: "1ddeefe1-3e9c-4576-b226-e8c3b6462947"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.892290 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp" (OuterVolumeSpecName: "kube-api-access-jvhsp") pod "1ddeefe1-3e9c-4576-b226-e8c3b6462947" (UID: "1ddeefe1-3e9c-4576-b226-e8c3b6462947"). InnerVolumeSpecName "kube-api-access-jvhsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.957564 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ddeefe1-3e9c-4576-b226-e8c3b6462947" (UID: "1ddeefe1-3e9c-4576-b226-e8c3b6462947"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.982300 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvhsp\" (UniqueName: \"kubernetes.io/projected/1ddeefe1-3e9c-4576-b226-e8c3b6462947-kube-api-access-jvhsp\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.982351 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:05 crc kubenswrapper[4869]: I0202 15:11:05.982364 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ddeefe1-3e9c-4576-b226-e8c3b6462947-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.342674 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerID="f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" exitCode=0 Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.342780 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfjq8" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.342796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerDied","Data":"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd"} Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.343286 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfjq8" event={"ID":"1ddeefe1-3e9c-4576-b226-e8c3b6462947","Type":"ContainerDied","Data":"ef25471803dfe9339a9d1b0293283644c98a8f02010d70dbd37f66e7576d60e8"} Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.343317 4869 scope.go:117] "RemoveContainer" containerID="f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.365018 4869 scope.go:117] "RemoveContainer" containerID="e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.383003 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.389251 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rfjq8"] Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.400585 4869 scope.go:117] "RemoveContainer" containerID="7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.443537 4869 scope.go:117] "RemoveContainer" containerID="f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" Feb 02 15:11:06 crc kubenswrapper[4869]: E0202 15:11:06.444044 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd\": container with ID starting with f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd not found: ID does not exist" containerID="f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.444171 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd"} err="failed to get container status \"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd\": rpc error: code = NotFound desc = could not find container \"f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd\": container with ID starting with f98bab4c9551817885ae7bd0ac6cddbe9360d8a4f700dc1c8a3122cd60c3c0fd not found: ID does not exist" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.444206 4869 scope.go:117] "RemoveContainer" containerID="e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f" Feb 02 15:11:06 crc kubenswrapper[4869]: E0202 15:11:06.444680 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f\": container with ID starting with e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f not found: ID does not exist" containerID="e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.444717 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f"} err="failed to get container status \"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f\": rpc error: code = NotFound desc = could not find container \"e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f\": container with ID starting with e609edea5ae04bba1aecd5ff5edbdd919d4bbe7389d42122011d7f81263df02f not found: ID does not exist" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.444734 4869 scope.go:117] "RemoveContainer" containerID="7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4" Feb 02 15:11:06 crc kubenswrapper[4869]: E0202 15:11:06.445466 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4\": container with ID starting with 7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4 not found: ID does not exist" containerID="7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4" Feb 02 15:11:06 crc kubenswrapper[4869]: I0202 15:11:06.445493 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4"} err="failed to get container status \"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4\": rpc error: code = NotFound desc = could not find container \"7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4\": container with ID starting with 7e7d4979cef079ad5f33a0489707a24a329068055b259db377375a78176454c4 not found: ID does not exist" Feb 02 15:11:07 crc kubenswrapper[4869]: I0202 15:11:07.474144 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" path="/var/lib/kubelet/pods/1ddeefe1-3e9c-4576-b226-e8c3b6462947/volumes" Feb 02 15:11:09 crc kubenswrapper[4869]: I0202 15:11:09.463690 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:11:09 crc kubenswrapper[4869]: E0202 15:11:09.464718 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:11:14 crc kubenswrapper[4869]: I0202 15:11:14.426382 4869 generic.go:334] "Generic (PLEG): container finished" podID="5ca847f3-12e0-43a7-af47-6739dc10627d" containerID="f0f59f64f18cd831b0ccbcfaeef9e58c704291972b6c59a787453f7131843bee" exitCode=0 Feb 02 15:11:14 crc kubenswrapper[4869]: I0202 15:11:14.426549 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" event={"ID":"5ca847f3-12e0-43a7-af47-6739dc10627d","Type":"ContainerDied","Data":"f0f59f64f18cd831b0ccbcfaeef9e58c704291972b6c59a787453f7131843bee"} Feb 02 15:11:15 crc kubenswrapper[4869]: I0202 15:11:15.890555 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003130 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003255 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003337 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003381 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.003454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") pod \"5ca847f3-12e0-43a7-af47-6739dc10627d\" (UID: \"5ca847f3-12e0-43a7-af47-6739dc10627d\") " Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.009324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph" (OuterVolumeSpecName: "ceph") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.009397 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42" (OuterVolumeSpecName: "kube-api-access-pvv42") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "kube-api-access-pvv42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.009931 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.029163 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.032309 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory" (OuterVolumeSpecName: "inventory") pod "5ca847f3-12e0-43a7-af47-6739dc10627d" (UID: "5ca847f3-12e0-43a7-af47-6739dc10627d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106772 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106820 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106835 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106848 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvv42\" (UniqueName: \"kubernetes.io/projected/5ca847f3-12e0-43a7-af47-6739dc10627d-kube-api-access-pvv42\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.106861 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca847f3-12e0-43a7-af47-6739dc10627d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.450389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" event={"ID":"5ca847f3-12e0-43a7-af47-6739dc10627d","Type":"ContainerDied","Data":"af2ea32d786cda13426e5b56227ed5b1f4953e3931b299286158fd837d86464e"} Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.450732 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af2ea32d786cda13426e5b56227ed5b1f4953e3931b299286158fd837d86464e" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.450818 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557230 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"] Feb 02 15:11:16 crc kubenswrapper[4869]: E0202 15:11:16.557741 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca847f3-12e0-43a7-af47-6739dc10627d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557771 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca847f3-12e0-43a7-af47-6739dc10627d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:16 crc kubenswrapper[4869]: E0202 15:11:16.557819 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="extract-content" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557830 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="extract-content" Feb 02 15:11:16 crc kubenswrapper[4869]: E0202 15:11:16.557841 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="registry-server" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557848 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="registry-server" Feb 02 15:11:16 crc kubenswrapper[4869]: E0202 15:11:16.557867 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="extract-utilities" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.557874 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="extract-utilities" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.558127 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ddeefe1-3e9c-4576-b226-e8c3b6462947" containerName="registry-server" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.558187 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca847f3-12e0-43a7-af47-6739dc10627d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.559380 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.563850 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.564631 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.564650 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.564741 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.565764 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.572567 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"] Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.718628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.718681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.718876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.718967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.820850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.820947 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.820984 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.821120 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.826340 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.827063 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.831499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.844434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-txn47\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:16 crc kubenswrapper[4869]: I0202 15:11:16.883110 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:17 crc kubenswrapper[4869]: I0202 15:11:17.441487 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47"] Feb 02 15:11:17 crc kubenswrapper[4869]: I0202 15:11:17.461280 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" event={"ID":"19c443c4-baed-4a61-bc6d-bc8ba528e326","Type":"ContainerStarted","Data":"acf88936080f9b69bbfc59ba61fe21d0d09c169098d92792d7fc2b90aac78878"} Feb 02 15:11:18 crc kubenswrapper[4869]: I0202 15:11:18.474398 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" event={"ID":"19c443c4-baed-4a61-bc6d-bc8ba528e326","Type":"ContainerStarted","Data":"cde84badc546ed3361ad6d70faccac9ff76362cd4f63c4e1c7c03f18d947a8d1"} Feb 02 15:11:18 crc kubenswrapper[4869]: I0202 15:11:18.510177 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" podStartSLOduration=1.963541355 podStartE2EDuration="2.510145819s" podCreationTimestamp="2026-02-02 15:11:16 +0000 UTC" firstStartedPulling="2026-02-02 15:11:17.442873537 +0000 UTC m=+2279.087510307" lastFinishedPulling="2026-02-02 15:11:17.989478001 +0000 UTC m=+2279.634114771" observedRunningTime="2026-02-02 15:11:18.495310295 +0000 UTC m=+2280.139947065" watchObservedRunningTime="2026-02-02 15:11:18.510145819 +0000 UTC m=+2280.154782609" Feb 02 15:11:21 crc kubenswrapper[4869]: I0202 15:11:21.464096 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:11:21 crc kubenswrapper[4869]: E0202 15:11:21.465260 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:11:30 crc kubenswrapper[4869]: I0202 15:11:30.242694 4869 scope.go:117] "RemoveContainer" containerID="f55a47c4ff2286da3a6e2327eb568bde4d649c547bbd0bd0f76ad0552dc9b592" Feb 02 15:11:30 crc kubenswrapper[4869]: I0202 15:11:30.295687 4869 scope.go:117] "RemoveContainer" containerID="38d7a89ad8dafd903d91d39613d610dcd9e24c5bf586ce35754a68930252625d" Feb 02 15:11:30 crc kubenswrapper[4869]: I0202 15:11:30.336690 4869 scope.go:117] "RemoveContainer" containerID="64ec45e26a2128c47c0bb7daf081c9f113c4f88a49f073769f3d890df34abd30" Feb 02 15:11:33 crc kubenswrapper[4869]: I0202 15:11:33.463771 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:11:33 crc kubenswrapper[4869]: E0202 15:11:33.464949 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:11:39 crc 
kubenswrapper[4869]: I0202 15:11:39.721576 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.724329 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.733278 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.890047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.890441 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.890570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.993156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.993391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.993478 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.994315 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:39 crc kubenswrapper[4869]: I0202 15:11:39.994429 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:40 crc kubenswrapper[4869]: I0202 15:11:40.025887 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") pod \"redhat-marketplace-tzvff\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:40 crc kubenswrapper[4869]: I0202 15:11:40.052602 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:40 crc kubenswrapper[4869]: I0202 15:11:40.585634 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:40 crc kubenswrapper[4869]: I0202 15:11:40.716530 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerStarted","Data":"53ae6a61a15772e781d210ab96db6151129525f1ece11bcdfe4cb307a47ab13a"} Feb 02 15:11:41 crc kubenswrapper[4869]: I0202 15:11:41.727180 4869 generic.go:334] "Generic (PLEG): container finished" podID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerID="4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52" exitCode=0 Feb 02 15:11:41 crc kubenswrapper[4869]: I0202 15:11:41.727244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerDied","Data":"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52"} Feb 02 15:11:42 crc kubenswrapper[4869]: I0202 15:11:42.741375 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerStarted","Data":"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5"} Feb 02 15:11:43 crc kubenswrapper[4869]: I0202 15:11:43.754054 4869 generic.go:334] "Generic (PLEG): container finished" podID="19c443c4-baed-4a61-bc6d-bc8ba528e326" containerID="cde84badc546ed3361ad6d70faccac9ff76362cd4f63c4e1c7c03f18d947a8d1" exitCode=0 Feb 02 15:11:43 crc kubenswrapper[4869]: I0202 15:11:43.754137 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" event={"ID":"19c443c4-baed-4a61-bc6d-bc8ba528e326","Type":"ContainerDied","Data":"cde84badc546ed3361ad6d70faccac9ff76362cd4f63c4e1c7c03f18d947a8d1"} Feb 02 15:11:43 crc kubenswrapper[4869]: I0202 15:11:43.759213 4869 generic.go:334] "Generic (PLEG): container finished" podID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerID="fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5" exitCode=0 Feb 02 15:11:43 crc kubenswrapper[4869]: I0202 15:11:43.759293 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerDied","Data":"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5"} Feb 02 15:11:44 crc kubenswrapper[4869]: I0202 15:11:44.771241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" 
event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerStarted","Data":"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145"} Feb 02 15:11:44 crc kubenswrapper[4869]: I0202 15:11:44.813250 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tzvff" podStartSLOduration=3.013288265 podStartE2EDuration="5.813219986s" podCreationTimestamp="2026-02-02 15:11:39 +0000 UTC" firstStartedPulling="2026-02-02 15:11:41.729590678 +0000 UTC m=+2303.374227458" lastFinishedPulling="2026-02-02 15:11:44.529522409 +0000 UTC m=+2306.174159179" observedRunningTime="2026-02-02 15:11:44.801282143 +0000 UTC m=+2306.445918923" watchObservedRunningTime="2026-02-02 15:11:44.813219986 +0000 UTC m=+2306.457856756" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.262779 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.421887 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") pod \"19c443c4-baed-4a61-bc6d-bc8ba528e326\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.422056 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") pod \"19c443c4-baed-4a61-bc6d-bc8ba528e326\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.422322 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") pod \"19c443c4-baed-4a61-bc6d-bc8ba528e326\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.422414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") pod \"19c443c4-baed-4a61-bc6d-bc8ba528e326\" (UID: \"19c443c4-baed-4a61-bc6d-bc8ba528e326\") " Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.430216 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78" (OuterVolumeSpecName: "kube-api-access-nfz78") pod "19c443c4-baed-4a61-bc6d-bc8ba528e326" (UID: "19c443c4-baed-4a61-bc6d-bc8ba528e326"). InnerVolumeSpecName "kube-api-access-nfz78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.431688 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph" (OuterVolumeSpecName: "ceph") pod "19c443c4-baed-4a61-bc6d-bc8ba528e326" (UID: "19c443c4-baed-4a61-bc6d-bc8ba528e326"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.461248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory" (OuterVolumeSpecName: "inventory") pod "19c443c4-baed-4a61-bc6d-bc8ba528e326" (UID: "19c443c4-baed-4a61-bc6d-bc8ba528e326"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.470499 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "19c443c4-baed-4a61-bc6d-bc8ba528e326" (UID: "19c443c4-baed-4a61-bc6d-bc8ba528e326"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.525221 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.525287 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.525299 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfz78\" (UniqueName: \"kubernetes.io/projected/19c443c4-baed-4a61-bc6d-bc8ba528e326-kube-api-access-nfz78\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.525314 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/19c443c4-baed-4a61-bc6d-bc8ba528e326-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.784528 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.784545 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-txn47" event={"ID":"19c443c4-baed-4a61-bc6d-bc8ba528e326","Type":"ContainerDied","Data":"acf88936080f9b69bbfc59ba61fe21d0d09c169098d92792d7fc2b90aac78878"} Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.784648 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf88936080f9b69bbfc59ba61fe21d0d09c169098d92792d7fc2b90aac78878" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.881074 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr"] Feb 02 15:11:45 crc kubenswrapper[4869]: E0202 15:11:45.881490 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c443c4-baed-4a61-bc6d-bc8ba528e326" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.881508 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c443c4-baed-4a61-bc6d-bc8ba528e326" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.881683 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c443c4-baed-4a61-bc6d-bc8ba528e326" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.882351 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.884584 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.884737 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.884933 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.885056 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.887069 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:11:45 crc kubenswrapper[4869]: I0202 15:11:45.898803 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr"] Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.036126 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.036210 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.036398 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.037029 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.140001 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.140177 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.140252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.140293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.145804 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.147213 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.148436 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.165242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-48vgr\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.202057 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:46 crc kubenswrapper[4869]: W0202 15:11:46.753217 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34077009_4156_4523_9f51_24147190e39c.slice/crio-a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92 WatchSource:0}: Error finding container a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92: Status 404 returned error can't find the container with id a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92 Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.755328 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr"] Feb 02 15:11:46 crc kubenswrapper[4869]: I0202 15:11:46.796180 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" event={"ID":"34077009-4156-4523-9f51-24147190e39c","Type":"ContainerStarted","Data":"a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92"} Feb 02 15:11:47 crc kubenswrapper[4869]: I0202 15:11:47.463506 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:11:47 crc kubenswrapper[4869]: E0202 15:11:47.464306 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:11:47 crc kubenswrapper[4869]: I0202 15:11:47.813396 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" event={"ID":"34077009-4156-4523-9f51-24147190e39c","Type":"ContainerStarted","Data":"9526758d149497a69e282bca21d274216371b7965602b112ae44ab9d019d3b69"} Feb 02 
15:11:47 crc kubenswrapper[4869]: I0202 15:11:47.843710 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" podStartSLOduration=2.400332407 podStartE2EDuration="2.8436878s" podCreationTimestamp="2026-02-02 15:11:45 +0000 UTC" firstStartedPulling="2026-02-02 15:11:46.756652034 +0000 UTC m=+2308.401288814" lastFinishedPulling="2026-02-02 15:11:47.200007427 +0000 UTC m=+2308.844644207" observedRunningTime="2026-02-02 15:11:47.841843975 +0000 UTC m=+2309.486480785" watchObservedRunningTime="2026-02-02 15:11:47.8436878 +0000 UTC m=+2309.488324570" Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.053710 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.054269 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.112254 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.927844 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:50 crc kubenswrapper[4869]: I0202 15:11:50.992221 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:52 crc kubenswrapper[4869]: I0202 15:11:52.877999 4869 generic.go:334] "Generic (PLEG): container finished" podID="34077009-4156-4523-9f51-24147190e39c" containerID="9526758d149497a69e282bca21d274216371b7965602b112ae44ab9d019d3b69" exitCode=0 Feb 02 15:11:52 crc kubenswrapper[4869]: I0202 15:11:52.878171 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" event={"ID":"34077009-4156-4523-9f51-24147190e39c","Type":"ContainerDied","Data":"9526758d149497a69e282bca21d274216371b7965602b112ae44ab9d019d3b69"} Feb 02 15:11:52 crc kubenswrapper[4869]: I0202 15:11:52.880371 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tzvff" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="registry-server" containerID="cri-o://538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" gracePeriod=2 Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.344569 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.539352 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") pod \"593827cf-cb4f-4ce4-9600-ed91af9aca43\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.539697 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") pod \"593827cf-cb4f-4ce4-9600-ed91af9aca43\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.540422 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") pod \"593827cf-cb4f-4ce4-9600-ed91af9aca43\" (UID: \"593827cf-cb4f-4ce4-9600-ed91af9aca43\") " Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.541245 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities" (OuterVolumeSpecName: "utilities") pod "593827cf-cb4f-4ce4-9600-ed91af9aca43" (UID: "593827cf-cb4f-4ce4-9600-ed91af9aca43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.541646 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.549884 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86" (OuterVolumeSpecName: "kube-api-access-rmx86") pod "593827cf-cb4f-4ce4-9600-ed91af9aca43" (UID: "593827cf-cb4f-4ce4-9600-ed91af9aca43"). InnerVolumeSpecName "kube-api-access-rmx86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.564503 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "593827cf-cb4f-4ce4-9600-ed91af9aca43" (UID: "593827cf-cb4f-4ce4-9600-ed91af9aca43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.644150 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmx86\" (UniqueName: \"kubernetes.io/projected/593827cf-cb4f-4ce4-9600-ed91af9aca43-kube-api-access-rmx86\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.644199 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/593827cf-cb4f-4ce4-9600-ed91af9aca43-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.919020 4869 generic.go:334] "Generic (PLEG): container finished" podID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerID="538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" exitCode=0 Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.919111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerDied","Data":"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145"} Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.919222 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzvff" event={"ID":"593827cf-cb4f-4ce4-9600-ed91af9aca43","Type":"ContainerDied","Data":"53ae6a61a15772e781d210ab96db6151129525f1ece11bcdfe4cb307a47ab13a"} Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.919250 4869 scope.go:117] "RemoveContainer" containerID="538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.920601 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzvff" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.949549 4869 scope.go:117] "RemoveContainer" containerID="fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5" Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.971101 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:53 crc kubenswrapper[4869]: I0202 15:11:53.982009 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzvff"] Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.026168 4869 scope.go:117] "RemoveContainer" containerID="4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.048322 4869 scope.go:117] "RemoveContainer" containerID="538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" Feb 02 15:11:54 crc kubenswrapper[4869]: E0202 15:11:54.055368 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145\": container with ID starting with 538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145 not found: ID does not exist" containerID="538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.055404 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145"} err="failed to get container status \"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145\": rpc error: code = NotFound desc = could not find container \"538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145\": container with ID starting with 538e6ca812b5b1c5b3b171c000d9da0c66ccd358ac3247292871e51ddcb75145 not found: ID does not exist" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.055430 4869 scope.go:117] "RemoveContainer" containerID="fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5" Feb 02 15:11:54 crc kubenswrapper[4869]: E0202 15:11:54.056770 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5\": container with ID starting with fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5 not found: ID does not exist" containerID="fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.056793 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5"} err="failed to get container status \"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5\": rpc error: code = NotFound desc = could not find container \"fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5\": container with ID starting with fe4cf382acd1bf14e22834b844b94ac5cd7c5c29c35e6a4db43be3dacd92f2b5 not found: ID does not exist" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.056807 4869 scope.go:117] "RemoveContainer" containerID="4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52" Feb 02 15:11:54 crc kubenswrapper[4869]: E0202 15:11:54.058198 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52\": container with ID starting with 4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52 not found: ID does not exist" containerID="4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.058327 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52"} err="failed to get container status \"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52\": rpc error: code = NotFound desc = could not find container \"4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52\": container with ID starting with 4856d49b7f1a3f96659c36d406d4d034330f5ac036ac30938f407fbdaa748b52 not found: ID does not exist" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.388929 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.561844 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") pod \"34077009-4156-4523-9f51-24147190e39c\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.562009 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") pod \"34077009-4156-4523-9f51-24147190e39c\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.562126 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") pod \"34077009-4156-4523-9f51-24147190e39c\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.562221 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") pod \"34077009-4156-4523-9f51-24147190e39c\" (UID: \"34077009-4156-4523-9f51-24147190e39c\") " Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.568261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph" (OuterVolumeSpecName: "ceph") pod "34077009-4156-4523-9f51-24147190e39c" (UID: "34077009-4156-4523-9f51-24147190e39c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.570067 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf" (OuterVolumeSpecName: "kube-api-access-mqfxf") pod "34077009-4156-4523-9f51-24147190e39c" (UID: "34077009-4156-4523-9f51-24147190e39c"). InnerVolumeSpecName "kube-api-access-mqfxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.607475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34077009-4156-4523-9f51-24147190e39c" (UID: "34077009-4156-4523-9f51-24147190e39c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.610531 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory" (OuterVolumeSpecName: "inventory") pod "34077009-4156-4523-9f51-24147190e39c" (UID: "34077009-4156-4523-9f51-24147190e39c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.667778 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.667818 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.667836 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/34077009-4156-4523-9f51-24147190e39c-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.667850 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqfxf\" (UniqueName: \"kubernetes.io/projected/34077009-4156-4523-9f51-24147190e39c-kube-api-access-mqfxf\") on node \"crc\" DevicePath \"\"" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.936939 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" event={"ID":"34077009-4156-4523-9f51-24147190e39c","Type":"ContainerDied","Data":"a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92"} Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.936982 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-48vgr" Feb 02 15:11:54 crc kubenswrapper[4869]: I0202 15:11:54.936997 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9bc560c0658e1a7eed7cea57f46accdc8ecb7a68279209e981378ed5c203d92" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.017361 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc"] Feb 02 15:11:55 crc kubenswrapper[4869]: E0202 15:11:55.017941 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="extract-content" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.017962 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="extract-content" Feb 02 15:11:55 crc kubenswrapper[4869]: E0202 15:11:55.017984 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="extract-utilities" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.017992 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="extract-utilities" Feb 02 15:11:55 crc kubenswrapper[4869]: E0202 15:11:55.018016 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="registry-server" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.018024 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="registry-server" Feb 02 15:11:55 crc kubenswrapper[4869]: E0202 15:11:55.018038 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34077009-4156-4523-9f51-24147190e39c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.018047 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="34077009-4156-4523-9f51-24147190e39c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.018247 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" containerName="registry-server" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.018261 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="34077009-4156-4523-9f51-24147190e39c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.019113 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.021648 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.022208 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.022330 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.021721 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.034504 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.056797 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc"] Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.178489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.179432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.179580 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.179702 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.281539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.281711 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.281840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.282136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.287821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.288478 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.288500 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.302347 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rsvsc\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.346969 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.482007 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593827cf-cb4f-4ce4-9600-ed91af9aca43" path="/var/lib/kubelet/pods/593827cf-cb4f-4ce4-9600-ed91af9aca43/volumes" Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.926325 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc"] Feb 02 15:11:55 crc kubenswrapper[4869]: I0202 15:11:55.950834 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" event={"ID":"04202cce-c3c1-483c-9d50-0fcf9a398094","Type":"ContainerStarted","Data":"384162117dc63ce3f5a7c9c83a29a570f7ffbffa8a5d5c4c94f7c36292e790fc"} Feb 02 15:11:56 crc kubenswrapper[4869]: I0202 15:11:56.964474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" event={"ID":"04202cce-c3c1-483c-9d50-0fcf9a398094","Type":"ContainerStarted","Data":"8132da2ec517a8421d696587dbb443e080c1257379cee4569885d339f8cbd656"} Feb 02 15:11:57 crc kubenswrapper[4869]: I0202 15:11:57.000812 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" podStartSLOduration=2.570934524 podStartE2EDuration="3.000782345s" podCreationTimestamp="2026-02-02 15:11:54 +0000 UTC" firstStartedPulling="2026-02-02 15:11:55.932360504 +0000 UTC m=+2317.576997294" lastFinishedPulling="2026-02-02 15:11:56.362208315 +0000 UTC m=+2318.006845115" observedRunningTime="2026-02-02 15:11:56.993648329 +0000 UTC m=+2318.638285189" watchObservedRunningTime="2026-02-02 15:11:57.000782345 +0000 UTC m=+2318.645419145" Feb 02 15:12:02 crc kubenswrapper[4869]: I0202 15:12:02.463705 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:02 crc kubenswrapper[4869]: E0202 15:12:02.464904 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:12:16 crc kubenswrapper[4869]: I0202 15:12:16.462510 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:16 crc kubenswrapper[4869]: E0202 15:12:16.463397 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:12:27 crc kubenswrapper[4869]: I0202 15:12:27.463672 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:27 crc kubenswrapper[4869]: E0202 15:12:27.464997 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:12:31 crc kubenswrapper[4869]: I0202 15:12:31.332686 4869 generic.go:334] "Generic (PLEG): container finished" podID="04202cce-c3c1-483c-9d50-0fcf9a398094" containerID="8132da2ec517a8421d696587dbb443e080c1257379cee4569885d339f8cbd656" exitCode=0 Feb 02 15:12:31 crc kubenswrapper[4869]: I0202 15:12:31.333012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" event={"ID":"04202cce-c3c1-483c-9d50-0fcf9a398094","Type":"ContainerDied","Data":"8132da2ec517a8421d696587dbb443e080c1257379cee4569885d339f8cbd656"} Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.827175 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.878254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") pod \"04202cce-c3c1-483c-9d50-0fcf9a398094\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.878352 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") pod \"04202cce-c3c1-483c-9d50-0fcf9a398094\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.878412 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") pod \"04202cce-c3c1-483c-9d50-0fcf9a398094\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.878523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") pod \"04202cce-c3c1-483c-9d50-0fcf9a398094\" (UID: \"04202cce-c3c1-483c-9d50-0fcf9a398094\") " Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.886248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb" (OuterVolumeSpecName: "kube-api-access-9nsmb") pod "04202cce-c3c1-483c-9d50-0fcf9a398094" (UID: "04202cce-c3c1-483c-9d50-0fcf9a398094"). InnerVolumeSpecName "kube-api-access-9nsmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.886688 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph" (OuterVolumeSpecName: "ceph") pod "04202cce-c3c1-483c-9d50-0fcf9a398094" (UID: "04202cce-c3c1-483c-9d50-0fcf9a398094"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.916631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "04202cce-c3c1-483c-9d50-0fcf9a398094" (UID: "04202cce-c3c1-483c-9d50-0fcf9a398094"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.927764 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory" (OuterVolumeSpecName: "inventory") pod "04202cce-c3c1-483c-9d50-0fcf9a398094" (UID: "04202cce-c3c1-483c-9d50-0fcf9a398094"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.981580 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nsmb\" (UniqueName: \"kubernetes.io/projected/04202cce-c3c1-483c-9d50-0fcf9a398094-kube-api-access-9nsmb\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.981630 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.981651 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:32 crc kubenswrapper[4869]: I0202 15:12:32.981673 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04202cce-c3c1-483c-9d50-0fcf9a398094-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.356444 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" event={"ID":"04202cce-c3c1-483c-9d50-0fcf9a398094","Type":"ContainerDied","Data":"384162117dc63ce3f5a7c9c83a29a570f7ffbffa8a5d5c4c94f7c36292e790fc"} Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.356508 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="384162117dc63ce3f5a7c9c83a29a570f7ffbffa8a5d5c4c94f7c36292e790fc" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.356866 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rsvsc" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.454265 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh"] Feb 02 15:12:33 crc kubenswrapper[4869]: E0202 15:12:33.454662 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04202cce-c3c1-483c-9d50-0fcf9a398094" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.454681 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="04202cce-c3c1-483c-9d50-0fcf9a398094" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.454859 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="04202cce-c3c1-483c-9d50-0fcf9a398094" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.455464 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.457733 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.458635 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.458963 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.460520 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.461565 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.482596 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh"] Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.497142 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.497295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.497466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.498681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.600655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.600797 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.601014 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.601110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.606828 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.607339 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.609206 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.621612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:33 crc kubenswrapper[4869]: I0202 15:12:33.779202 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:34 crc kubenswrapper[4869]: I0202 15:12:34.339734 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh"] Feb 02 15:12:34 crc kubenswrapper[4869]: I0202 15:12:34.367298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" event={"ID":"67cb4a99-39e2-4e00-88f5-748ad16cb874","Type":"ContainerStarted","Data":"e70061b2b29f5065618bdcae2caaf357d73c1f036f2f96b3530b6e8204f68716"} Feb 02 15:12:35 crc kubenswrapper[4869]: I0202 15:12:35.378350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" event={"ID":"67cb4a99-39e2-4e00-88f5-748ad16cb874","Type":"ContainerStarted","Data":"0a03f366dd3f2f3e065cb5cc8356200cdb3cd9ea6e0dfdc460968a29d9e33f18"} Feb 02 15:12:35 crc kubenswrapper[4869]: I0202 15:12:35.397190 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" podStartSLOduration=1.8771394940000001 podStartE2EDuration="2.397172097s" podCreationTimestamp="2026-02-02 15:12:33 +0000 UTC" firstStartedPulling="2026-02-02 15:12:34.349390812 +0000 UTC m=+2355.994027622" lastFinishedPulling="2026-02-02 15:12:34.869423405 +0000 UTC m=+2356.514060225" observedRunningTime="2026-02-02 15:12:35.395985118 +0000 UTC m=+2357.040621888" watchObservedRunningTime="2026-02-02 15:12:35.397172097 +0000 UTC m=+2357.041808867" Feb 02 15:12:39 crc kubenswrapper[4869]: I0202 15:12:39.416018 4869 generic.go:334] "Generic (PLEG): container finished" podID="67cb4a99-39e2-4e00-88f5-748ad16cb874" containerID="0a03f366dd3f2f3e065cb5cc8356200cdb3cd9ea6e0dfdc460968a29d9e33f18" exitCode=0 Feb 02 15:12:39 crc kubenswrapper[4869]: I0202 15:12:39.416184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" event={"ID":"67cb4a99-39e2-4e00-88f5-748ad16cb874","Type":"ContainerDied","Data":"0a03f366dd3f2f3e065cb5cc8356200cdb3cd9ea6e0dfdc460968a29d9e33f18"} Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.831068 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.990239 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") pod \"67cb4a99-39e2-4e00-88f5-748ad16cb874\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.990820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") pod \"67cb4a99-39e2-4e00-88f5-748ad16cb874\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.990929 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") pod \"67cb4a99-39e2-4e00-88f5-748ad16cb874\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.991139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") pod \"67cb4a99-39e2-4e00-88f5-748ad16cb874\" (UID: \"67cb4a99-39e2-4e00-88f5-748ad16cb874\") " Feb 02 15:12:40 crc kubenswrapper[4869]: I0202 15:12:40.998198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw" (OuterVolumeSpecName: "kube-api-access-plcrw") pod "67cb4a99-39e2-4e00-88f5-748ad16cb874" (UID: "67cb4a99-39e2-4e00-88f5-748ad16cb874"). InnerVolumeSpecName "kube-api-access-plcrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.000630 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph" (OuterVolumeSpecName: "ceph") pod "67cb4a99-39e2-4e00-88f5-748ad16cb874" (UID: "67cb4a99-39e2-4e00-88f5-748ad16cb874"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.024811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory" (OuterVolumeSpecName: "inventory") pod "67cb4a99-39e2-4e00-88f5-748ad16cb874" (UID: "67cb4a99-39e2-4e00-88f5-748ad16cb874"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.037122 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "67cb4a99-39e2-4e00-88f5-748ad16cb874" (UID: "67cb4a99-39e2-4e00-88f5-748ad16cb874"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.094637 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plcrw\" (UniqueName: \"kubernetes.io/projected/67cb4a99-39e2-4e00-88f5-748ad16cb874-kube-api-access-plcrw\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.094705 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.094730 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.094750 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67cb4a99-39e2-4e00-88f5-748ad16cb874-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.444437 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" event={"ID":"67cb4a99-39e2-4e00-88f5-748ad16cb874","Type":"ContainerDied","Data":"e70061b2b29f5065618bdcae2caaf357d73c1f036f2f96b3530b6e8204f68716"} Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.444773 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e70061b2b29f5065618bdcae2caaf357d73c1f036f2f96b3530b6e8204f68716" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.444613 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.468550 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:41 crc kubenswrapper[4869]: E0202 15:12:41.471731 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.568339 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7"] Feb 02 15:12:41 crc kubenswrapper[4869]: E0202 15:12:41.568899 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cb4a99-39e2-4e00-88f5-748ad16cb874" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.568940 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cb4a99-39e2-4e00-88f5-748ad16cb874" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.569230 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cb4a99-39e2-4e00-88f5-748ad16cb874" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.570123 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.573659 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.573944 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.576068 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.576359 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.576378 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.583266 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7"] Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.706816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.706888 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.706986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.707055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.809012 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.809064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.809098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.809137 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.814584 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.815024 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.815319 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.831875 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z97k7\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:41 crc kubenswrapper[4869]: I0202 15:12:41.890707 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:12:42 crc kubenswrapper[4869]: I0202 15:12:42.586291 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7"] Feb 02 15:12:43 crc kubenswrapper[4869]: I0202 15:12:43.493794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" event={"ID":"c94bd387-2568-4bea-a5be-0ff99e224681","Type":"ContainerStarted","Data":"759c19505a2a8a42dbbdd7a11a5d888506d9194c5d1b15b5a57a7a84f3e26fae"} Feb 02 15:12:43 crc kubenswrapper[4869]: I0202 15:12:43.494306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" event={"ID":"c94bd387-2568-4bea-a5be-0ff99e224681","Type":"ContainerStarted","Data":"6ee92e7c158290ca464863c53fe2dee50e2c9d4e8740b867bfefd6e98d2bfc5d"} Feb 02 15:12:43 crc kubenswrapper[4869]: I0202 15:12:43.502789 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" podStartSLOduration=2.067260616 podStartE2EDuration="2.502764836s" podCreationTimestamp="2026-02-02 15:12:41 +0000 UTC" firstStartedPulling="2026-02-02 15:12:42.590189297 +0000 UTC m=+2364.234826067" lastFinishedPulling="2026-02-02 15:12:43.025693517 +0000 UTC m=+2364.670330287" observedRunningTime="2026-02-02 15:12:43.498213825 +0000 UTC m=+2365.142850645" watchObservedRunningTime="2026-02-02 15:12:43.502764836 +0000 UTC m=+2365.147401606" Feb 02 15:12:53 crc kubenswrapper[4869]: I0202 15:12:53.464138 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:12:53 crc kubenswrapper[4869]: E0202 15:12:53.465100 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" 
podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:04 crc kubenswrapper[4869]: I0202 15:13:04.463079 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:13:04 crc kubenswrapper[4869]: E0202 15:13:04.464479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:19 crc kubenswrapper[4869]: I0202 15:13:19.470038 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:13:19 crc kubenswrapper[4869]: E0202 15:13:19.471011 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:23 crc kubenswrapper[4869]: I0202 15:13:23.957846 4869 generic.go:334] "Generic (PLEG): container finished" podID="c94bd387-2568-4bea-a5be-0ff99e224681" containerID="759c19505a2a8a42dbbdd7a11a5d888506d9194c5d1b15b5a57a7a84f3e26fae" exitCode=0 Feb 02 15:13:23 crc kubenswrapper[4869]: I0202 15:13:23.957898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" event={"ID":"c94bd387-2568-4bea-a5be-0ff99e224681","Type":"ContainerDied","Data":"759c19505a2a8a42dbbdd7a11a5d888506d9194c5d1b15b5a57a7a84f3e26fae"} Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.451809 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.527747 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") pod \"c94bd387-2568-4bea-a5be-0ff99e224681\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.528073 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") pod \"c94bd387-2568-4bea-a5be-0ff99e224681\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.528196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") pod \"c94bd387-2568-4bea-a5be-0ff99e224681\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.528237 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") pod \"c94bd387-2568-4bea-a5be-0ff99e224681\" (UID: \"c94bd387-2568-4bea-a5be-0ff99e224681\") " Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.534704 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph" (OuterVolumeSpecName: "ceph") pod "c94bd387-2568-4bea-a5be-0ff99e224681" (UID: "c94bd387-2568-4bea-a5be-0ff99e224681"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.535576 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg" (OuterVolumeSpecName: "kube-api-access-7h8pg") pod "c94bd387-2568-4bea-a5be-0ff99e224681" (UID: "c94bd387-2568-4bea-a5be-0ff99e224681"). InnerVolumeSpecName "kube-api-access-7h8pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.555851 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c94bd387-2568-4bea-a5be-0ff99e224681" (UID: "c94bd387-2568-4bea-a5be-0ff99e224681"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.568729 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory" (OuterVolumeSpecName: "inventory") pod "c94bd387-2568-4bea-a5be-0ff99e224681" (UID: "c94bd387-2568-4bea-a5be-0ff99e224681"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.630923 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.631437 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h8pg\" (UniqueName: \"kubernetes.io/projected/c94bd387-2568-4bea-a5be-0ff99e224681-kube-api-access-7h8pg\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.631548 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.631651 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c94bd387-2568-4bea-a5be-0ff99e224681-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.980703 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" event={"ID":"c94bd387-2568-4bea-a5be-0ff99e224681","Type":"ContainerDied","Data":"6ee92e7c158290ca464863c53fe2dee50e2c9d4e8740b867bfefd6e98d2bfc5d"} Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.980757 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee92e7c158290ca464863c53fe2dee50e2c9d4e8740b867bfefd6e98d2bfc5d" Feb 02 15:13:25 crc kubenswrapper[4869]: I0202 15:13:25.980815 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z97k7" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.097518 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-v2kr2"] Feb 02 15:13:26 crc kubenswrapper[4869]: E0202 15:13:26.097945 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94bd387-2568-4bea-a5be-0ff99e224681" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.097965 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94bd387-2568-4bea-a5be-0ff99e224681" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.098174 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94bd387-2568-4bea-a5be-0ff99e224681" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.098831 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.105217 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.105433 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.105639 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.105783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.107236 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.133374 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-v2kr2"] Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.243545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.243625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.243685 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.243771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.345388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.345519 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: 
\"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.345544 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.345563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.350566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.350986 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.353627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.372874 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") pod \"ssh-known-hosts-edpm-deployment-v2kr2\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:26 crc kubenswrapper[4869]: I0202 15:13:26.422688 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:27 crc kubenswrapper[4869]: I0202 15:13:27.000350 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-v2kr2"] Feb 02 15:13:28 crc kubenswrapper[4869]: I0202 15:13:27.999951 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" event={"ID":"3d624d16-2868-4154-a700-18e0cebe9357","Type":"ContainerStarted","Data":"ed3824d3864ea5a68a0a844944e9bafe167d7822db38d412f8ef322577714f18"} Feb 02 15:13:30 crc kubenswrapper[4869]: I0202 15:13:30.029176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" event={"ID":"3d624d16-2868-4154-a700-18e0cebe9357","Type":"ContainerStarted","Data":"08e1c40b2e7846b53264c3b23a65d32033fecad9b3eae45135d7df8ce84b7913"} Feb 02 15:13:30 crc kubenswrapper[4869]: I0202 15:13:30.066397 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" podStartSLOduration=2.336521741 podStartE2EDuration="4.06636739s" podCreationTimestamp="2026-02-02 15:13:26 +0000 UTC" firstStartedPulling="2026-02-02 15:13:27.007785667 +0000 UTC m=+2408.652422437" lastFinishedPulling="2026-02-02 15:13:28.737631316 +0000 UTC m=+2410.382268086" observedRunningTime="2026-02-02 15:13:30.058286012 +0000 UTC m=+2411.702922832" watchObservedRunningTime="2026-02-02 15:13:30.06636739 +0000 UTC m=+2411.711004200" Feb 02 15:13:33 crc kubenswrapper[4869]: I0202 15:13:33.481729 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:13:33 crc kubenswrapper[4869]: E0202 15:13:33.483233 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:38 crc kubenswrapper[4869]: I0202 15:13:38.110646 4869 generic.go:334] "Generic (PLEG): container finished" podID="3d624d16-2868-4154-a700-18e0cebe9357" containerID="08e1c40b2e7846b53264c3b23a65d32033fecad9b3eae45135d7df8ce84b7913" exitCode=0 Feb 02 15:13:38 crc kubenswrapper[4869]: I0202 15:13:38.110713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" event={"ID":"3d624d16-2868-4154-a700-18e0cebe9357","Type":"ContainerDied","Data":"08e1c40b2e7846b53264c3b23a65d32033fecad9b3eae45135d7df8ce84b7913"} Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.542332 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.654426 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") pod \"3d624d16-2868-4154-a700-18e0cebe9357\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.654542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") pod \"3d624d16-2868-4154-a700-18e0cebe9357\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.654588 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") pod \"3d624d16-2868-4154-a700-18e0cebe9357\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.654751 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") pod \"3d624d16-2868-4154-a700-18e0cebe9357\" (UID: \"3d624d16-2868-4154-a700-18e0cebe9357\") " Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.663146 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph" (OuterVolumeSpecName: "ceph") pod "3d624d16-2868-4154-a700-18e0cebe9357" (UID: "3d624d16-2868-4154-a700-18e0cebe9357"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.670530 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m" (OuterVolumeSpecName: "kube-api-access-ptm7m") pod "3d624d16-2868-4154-a700-18e0cebe9357" (UID: "3d624d16-2868-4154-a700-18e0cebe9357"). InnerVolumeSpecName "kube-api-access-ptm7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.693981 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3d624d16-2868-4154-a700-18e0cebe9357" (UID: "3d624d16-2868-4154-a700-18e0cebe9357"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.701485 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d624d16-2868-4154-a700-18e0cebe9357" (UID: "3d624d16-2868-4154-a700-18e0cebe9357"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.758219 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptm7m\" (UniqueName: \"kubernetes.io/projected/3d624d16-2868-4154-a700-18e0cebe9357-kube-api-access-ptm7m\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.758540 4869 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.758664 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:39 crc kubenswrapper[4869]: I0202 15:13:39.758775 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3d624d16-2868-4154-a700-18e0cebe9357-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.157945 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" event={"ID":"3d624d16-2868-4154-a700-18e0cebe9357","Type":"ContainerDied","Data":"ed3824d3864ea5a68a0a844944e9bafe167d7822db38d412f8ef322577714f18"} Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.157997 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3824d3864ea5a68a0a844944e9bafe167d7822db38d412f8ef322577714f18" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.158052 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-v2kr2" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.237755 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll"] Feb 02 15:13:40 crc kubenswrapper[4869]: E0202 15:13:40.238970 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d624d16-2868-4154-a700-18e0cebe9357" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.238999 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d624d16-2868-4154-a700-18e0cebe9357" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.239193 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d624d16-2868-4154-a700-18e0cebe9357" containerName="ssh-known-hosts-edpm-deployment" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.240098 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.243002 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.243219 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.243886 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.244022 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.244091 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.245810 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll"] Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.393283 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.393368 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.393466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.393529 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.496278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.496489 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.496564 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.496700 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.501184 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.502625 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.503620 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.519404 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lnnll\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:40 crc kubenswrapper[4869]: I0202 15:13:40.610269 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:41 crc kubenswrapper[4869]: I0202 15:13:41.215679 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll"] Feb 02 15:13:42 crc kubenswrapper[4869]: I0202 15:13:42.177467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" event={"ID":"4b9e0145-82e1-4dde-a4d2-d17e482d01b7","Type":"ContainerStarted","Data":"1a95623b36d22362083338868e5acb8f7d45c23a0142c51aa658536f6263aa2b"} Feb 02 15:13:42 crc kubenswrapper[4869]: I0202 15:13:42.177894 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" event={"ID":"4b9e0145-82e1-4dde-a4d2-d17e482d01b7","Type":"ContainerStarted","Data":"c3ac866d9a007493767fa28660cab10ef1d367d0e5d2eaa4ec0b49c766bef778"} Feb 02 15:13:42 crc kubenswrapper[4869]: I0202 15:13:42.204036 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" podStartSLOduration=1.732455223 podStartE2EDuration="2.204017726s" podCreationTimestamp="2026-02-02 15:13:40 +0000 UTC" firstStartedPulling="2026-02-02 15:13:41.217672399 +0000 UTC m=+2422.862309169" lastFinishedPulling="2026-02-02 15:13:41.689234902 +0000 UTC m=+2423.333871672" observedRunningTime="2026-02-02 15:13:42.198537982 +0000 UTC m=+2423.843174752" watchObservedRunningTime="2026-02-02 15:13:42.204017726 +0000 UTC m=+2423.848654496" Feb 02 15:13:47 crc kubenswrapper[4869]: I0202 15:13:47.463067 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:13:47 crc kubenswrapper[4869]: E0202 15:13:47.463804 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:13:51 crc kubenswrapper[4869]: I0202 15:13:51.268532 4869 generic.go:334] "Generic (PLEG): container finished" podID="4b9e0145-82e1-4dde-a4d2-d17e482d01b7" containerID="1a95623b36d22362083338868e5acb8f7d45c23a0142c51aa658536f6263aa2b" exitCode=0 Feb 02 15:13:51 crc kubenswrapper[4869]: I0202 15:13:51.269190 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" event={"ID":"4b9e0145-82e1-4dde-a4d2-d17e482d01b7","Type":"ContainerDied","Data":"1a95623b36d22362083338868e5acb8f7d45c23a0142c51aa658536f6263aa2b"} Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.686487 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.808549 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") pod \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.808608 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") pod \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.808904 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") pod \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.808995 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") pod \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\" (UID: \"4b9e0145-82e1-4dde-a4d2-d17e482d01b7\") " Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.818824 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph" (OuterVolumeSpecName: "ceph") pod "4b9e0145-82e1-4dde-a4d2-d17e482d01b7" (UID: "4b9e0145-82e1-4dde-a4d2-d17e482d01b7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.821467 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm" (OuterVolumeSpecName: "kube-api-access-vw4hm") pod "4b9e0145-82e1-4dde-a4d2-d17e482d01b7" (UID: "4b9e0145-82e1-4dde-a4d2-d17e482d01b7"). InnerVolumeSpecName "kube-api-access-vw4hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.856149 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory" (OuterVolumeSpecName: "inventory") pod "4b9e0145-82e1-4dde-a4d2-d17e482d01b7" (UID: "4b9e0145-82e1-4dde-a4d2-d17e482d01b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.866081 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4b9e0145-82e1-4dde-a4d2-d17e482d01b7" (UID: "4b9e0145-82e1-4dde-a4d2-d17e482d01b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.910785 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.910974 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.911060 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:52 crc kubenswrapper[4869]: I0202 15:13:52.911160 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw4hm\" (UniqueName: \"kubernetes.io/projected/4b9e0145-82e1-4dde-a4d2-d17e482d01b7-kube-api-access-vw4hm\") on node \"crc\" DevicePath \"\"" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.285201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" event={"ID":"4b9e0145-82e1-4dde-a4d2-d17e482d01b7","Type":"ContainerDied","Data":"c3ac866d9a007493767fa28660cab10ef1d367d0e5d2eaa4ec0b49c766bef778"} Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.285240 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ac866d9a007493767fa28660cab10ef1d367d0e5d2eaa4ec0b49c766bef778" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.285263 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lnnll" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.521545 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97"] Feb 02 15:13:53 crc kubenswrapper[4869]: E0202 15:13:53.522336 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9e0145-82e1-4dde-a4d2-d17e482d01b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.522384 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9e0145-82e1-4dde-a4d2-d17e482d01b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.522788 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9e0145-82e1-4dde-a4d2-d17e482d01b7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.524232 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.528642 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.529097 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.529532 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.529797 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.530258 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.555509 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97"] Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.627524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.627876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.627986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.628339 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.731510 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.731698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.731894 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.732004 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.740831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.741405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.743309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.773988 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:53 crc kubenswrapper[4869]: I0202 15:13:53.877169 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:13:54 crc kubenswrapper[4869]: I0202 15:13:54.269859 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97"] Feb 02 15:13:54 crc kubenswrapper[4869]: I0202 15:13:54.294596 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" event={"ID":"9ef6ee1c-f8bc-4060-8922-945b20187dfb","Type":"ContainerStarted","Data":"342ed92adcce5b144f2ee266e86695c4606bd9853d88bede3eb67bb1e01d4da3"} Feb 02 15:13:55 crc kubenswrapper[4869]: I0202 15:13:55.311061 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" event={"ID":"9ef6ee1c-f8bc-4060-8922-945b20187dfb","Type":"ContainerStarted","Data":"dd4eb40a25a63694253db80af6c7246ae78d3e8e3f770e2c96c6a5985aa11028"} Feb 02 15:13:55 crc kubenswrapper[4869]: I0202 15:13:55.341714 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" podStartSLOduration=1.906779507 podStartE2EDuration="2.341685702s" podCreationTimestamp="2026-02-02 15:13:53 +0000 UTC" firstStartedPulling="2026-02-02 15:13:54.274200715 +0000 UTC m=+2435.918837485" lastFinishedPulling="2026-02-02 15:13:54.70910687 +0000 UTC m=+2436.353743680" observedRunningTime="2026-02-02 15:13:55.337761036 +0000 UTC m=+2436.982397856" watchObservedRunningTime="2026-02-02 15:13:55.341685702 +0000 UTC m=+2436.986322472" Feb 02 15:14:02 crc kubenswrapper[4869]: I0202 15:14:02.462530 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:02 crc kubenswrapper[4869]: E0202 15:14:02.463379 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:14:04 crc kubenswrapper[4869]: I0202 15:14:04.405254 4869 generic.go:334] "Generic (PLEG): container finished" podID="9ef6ee1c-f8bc-4060-8922-945b20187dfb" containerID="dd4eb40a25a63694253db80af6c7246ae78d3e8e3f770e2c96c6a5985aa11028" exitCode=0 Feb 02 15:14:04 crc kubenswrapper[4869]: I0202 15:14:04.405343 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" event={"ID":"9ef6ee1c-f8bc-4060-8922-945b20187dfb","Type":"ContainerDied","Data":"dd4eb40a25a63694253db80af6c7246ae78d3e8e3f770e2c96c6a5985aa11028"} Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.858601 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.911932 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") pod \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.912002 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") pod \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.912201 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") pod \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.912259 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") pod \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\" (UID: \"9ef6ee1c-f8bc-4060-8922-945b20187dfb\") " Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.919477 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph" (OuterVolumeSpecName: "ceph") pod "9ef6ee1c-f8bc-4060-8922-945b20187dfb" (UID: "9ef6ee1c-f8bc-4060-8922-945b20187dfb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.922343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d" (OuterVolumeSpecName: "kube-api-access-rvk2d") pod "9ef6ee1c-f8bc-4060-8922-945b20187dfb" (UID: "9ef6ee1c-f8bc-4060-8922-945b20187dfb"). InnerVolumeSpecName "kube-api-access-rvk2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.942100 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory" (OuterVolumeSpecName: "inventory") pod "9ef6ee1c-f8bc-4060-8922-945b20187dfb" (UID: "9ef6ee1c-f8bc-4060-8922-945b20187dfb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:05 crc kubenswrapper[4869]: I0202 15:14:05.954598 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ef6ee1c-f8bc-4060-8922-945b20187dfb" (UID: "9ef6ee1c-f8bc-4060-8922-945b20187dfb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.014163 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.014407 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.014471 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvk2d\" (UniqueName: \"kubernetes.io/projected/9ef6ee1c-f8bc-4060-8922-945b20187dfb-kube-api-access-rvk2d\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.014533 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ef6ee1c-f8bc-4060-8922-945b20187dfb-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.433324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" event={"ID":"9ef6ee1c-f8bc-4060-8922-945b20187dfb","Type":"ContainerDied","Data":"342ed92adcce5b144f2ee266e86695c4606bd9853d88bede3eb67bb1e01d4da3"} Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.433406 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="342ed92adcce5b144f2ee266e86695c4606bd9853d88bede3eb67bb1e01d4da3" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.433367 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.643027 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g"] Feb 02 15:14:06 crc kubenswrapper[4869]: E0202 15:14:06.643527 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef6ee1c-f8bc-4060-8922-945b20187dfb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.643556 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef6ee1c-f8bc-4060-8922-945b20187dfb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.643824 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef6ee1c-f8bc-4060-8922-945b20187dfb" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.644712 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.648064 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.649137 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.649794 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.650142 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.650407 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.650644 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.651067 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.651333 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.672602 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g"] Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727818 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727863 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.727992 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728022 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728095 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728144 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728170 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728205 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.728225 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.829853 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830259 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830347 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830489 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.830546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" 
Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.838164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.839670 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.839990 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.841146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.841308 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.841725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.842000 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.846035 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: 
\"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.847065 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.862133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.866789 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.866940 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.868768 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zd67g\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:06 crc kubenswrapper[4869]: I0202 15:14:06.977712 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:07 crc kubenswrapper[4869]: I0202 15:14:07.357905 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g"] Feb 02 15:14:07 crc kubenswrapper[4869]: I0202 15:14:07.441451 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" event={"ID":"1cfd609a-5580-47a7-bb6d-afc564ca64d4","Type":"ContainerStarted","Data":"6cae2a815e4d258fb152f3c130db09c9f71494f2b17ad3fd0ad5350edc8cab28"} Feb 02 15:14:08 crc kubenswrapper[4869]: I0202 15:14:08.460259 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" event={"ID":"1cfd609a-5580-47a7-bb6d-afc564ca64d4","Type":"ContainerStarted","Data":"8a0a8792cdc68cd74560abc9c9fdc7ede2e8dec06d4c5bfe6331c1a371e82428"} Feb 02 15:14:08 crc kubenswrapper[4869]: I0202 15:14:08.498619 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" podStartSLOduration=2.088591687 podStartE2EDuration="2.49859959s" podCreationTimestamp="2026-02-02 15:14:06 +0000 UTC" firstStartedPulling="2026-02-02 15:14:07.379258162 +0000 UTC m=+2449.023894932" lastFinishedPulling="2026-02-02 15:14:07.789266065 +0000 UTC m=+2449.433902835" observedRunningTime="2026-02-02 15:14:08.495103615 +0000 UTC m=+2450.139740455" watchObservedRunningTime="2026-02-02 15:14:08.49859959 +0000 UTC m=+2450.143236360" Feb 02 15:14:14 crc kubenswrapper[4869]: I0202 15:14:14.463786 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:14 crc kubenswrapper[4869]: E0202 15:14:14.464652 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:14:25 crc kubenswrapper[4869]: I0202 15:14:25.463088 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:25 crc kubenswrapper[4869]: E0202 15:14:25.464475 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:14:38 crc kubenswrapper[4869]: I0202 15:14:38.750327 4869 generic.go:334] "Generic (PLEG): container finished" podID="1cfd609a-5580-47a7-bb6d-afc564ca64d4" containerID="8a0a8792cdc68cd74560abc9c9fdc7ede2e8dec06d4c5bfe6331c1a371e82428" exitCode=0 Feb 02 15:14:38 crc kubenswrapper[4869]: I0202 15:14:38.750409 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" 
event={"ID":"1cfd609a-5580-47a7-bb6d-afc564ca64d4","Type":"ContainerDied","Data":"8a0a8792cdc68cd74560abc9c9fdc7ede2e8dec06d4c5bfe6331c1a371e82428"} Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.226474 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372299 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372860 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.372957 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373008 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373085 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373147 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373195 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.373279 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") pod \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\" (UID: \"1cfd609a-5580-47a7-bb6d-afc564ca64d4\") " Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.379811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.379934 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.381565 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.381562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.382212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.382261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.382426 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.382606 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.383889 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2" (OuterVolumeSpecName: "kube-api-access-9w4h2") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "kube-api-access-9w4h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.385214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.386835 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph" (OuterVolumeSpecName: "ceph") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.416448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.418490 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory" (OuterVolumeSpecName: "inventory") pod "1cfd609a-5580-47a7-bb6d-afc564ca64d4" (UID: "1cfd609a-5580-47a7-bb6d-afc564ca64d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.462725 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:40 crc kubenswrapper[4869]: E0202 15:14:40.462997 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.475894 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.475968 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.475984 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.475999 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476013 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476027 
4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476043 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476056 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476067 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476079 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w4h2\" (UniqueName: \"kubernetes.io/projected/1cfd609a-5580-47a7-bb6d-afc564ca64d4-kube-api-access-9w4h2\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476090 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476101 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.476111 4869 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd609a-5580-47a7-bb6d-afc564ca64d4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.771027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" event={"ID":"1cfd609a-5580-47a7-bb6d-afc564ca64d4","Type":"ContainerDied","Data":"6cae2a815e4d258fb152f3c130db09c9f71494f2b17ad3fd0ad5350edc8cab28"} Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.771074 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cae2a815e4d258fb152f3c130db09c9f71494f2b17ad3fd0ad5350edc8cab28" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.771122 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zd67g" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.940826 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r"] Feb 02 15:14:40 crc kubenswrapper[4869]: E0202 15:14:40.941881 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfd609a-5580-47a7-bb6d-afc564ca64d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.942026 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfd609a-5580-47a7-bb6d-afc564ca64d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.942308 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfd609a-5580-47a7-bb6d-afc564ca64d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.943198 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.947278 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.947643 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.947744 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.948287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.948546 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:14:40 crc kubenswrapper[4869]: I0202 15:14:40.965372 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r"] Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.090872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.091269 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.091546 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.091631 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.193548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.193626 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.193681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.193833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.203266 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.204567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.207978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.224972 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.271214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:41 crc kubenswrapper[4869]: I0202 15:14:41.786362 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r"] Feb 02 15:14:42 crc kubenswrapper[4869]: I0202 15:14:42.789381 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" event={"ID":"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d","Type":"ContainerStarted","Data":"67ab939e61080d26360214528db25bd4d74ad68a7acfb34933b81476a785f9c5"} Feb 02 15:14:42 crc kubenswrapper[4869]: I0202 15:14:42.789436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" event={"ID":"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d","Type":"ContainerStarted","Data":"13253c94f81bfeddbdc2d05dd9ed224b396ab5bf978bd268c048992fa8ab6e1d"} Feb 02 15:14:42 crc kubenswrapper[4869]: I0202 15:14:42.812265 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" podStartSLOduration=2.400760352 podStartE2EDuration="2.812218003s" podCreationTimestamp="2026-02-02 15:14:40 +0000 UTC" firstStartedPulling="2026-02-02 15:14:41.796411543 +0000 UTC m=+2483.441048323" lastFinishedPulling="2026-02-02 15:14:42.207869204 +0000 UTC m=+2483.852505974" observedRunningTime="2026-02-02 15:14:42.803734945 +0000 UTC m=+2484.448371725" watchObservedRunningTime="2026-02-02 15:14:42.812218003 +0000 UTC m=+2484.456854783" Feb 02 15:14:47 crc kubenswrapper[4869]: I0202 15:14:47.839076 4869 generic.go:334] "Generic (PLEG): container finished" podID="89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" containerID="67ab939e61080d26360214528db25bd4d74ad68a7acfb34933b81476a785f9c5" exitCode=0 Feb 02 15:14:47 crc kubenswrapper[4869]: I0202 15:14:47.839157 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" event={"ID":"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d","Type":"ContainerDied","Data":"67ab939e61080d26360214528db25bd4d74ad68a7acfb34933b81476a785f9c5"} Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.294555 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.390511 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") pod \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.390783 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") pod \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.391572 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") pod \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.391672 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") pod \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\" (UID: \"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d\") " Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.396548 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf" (OuterVolumeSpecName: "kube-api-access-qfxsf") pod "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" (UID: "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d"). InnerVolumeSpecName "kube-api-access-qfxsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.396862 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph" (OuterVolumeSpecName: "ceph") pod "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" (UID: "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.423020 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" (UID: "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.429363 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory" (OuterVolumeSpecName: "inventory") pod "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" (UID: "89ab19c1-9bd6-4f8b-b295-aee078ee4b0d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.494284 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.494321 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfxsf\" (UniqueName: \"kubernetes.io/projected/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-kube-api-access-qfxsf\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.494333 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.494343 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/89ab19c1-9bd6-4f8b-b295-aee078ee4b0d-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.861709 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" event={"ID":"89ab19c1-9bd6-4f8b-b295-aee078ee4b0d","Type":"ContainerDied","Data":"13253c94f81bfeddbdc2d05dd9ed224b396ab5bf978bd268c048992fa8ab6e1d"} Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.861751 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13253c94f81bfeddbdc2d05dd9ed224b396ab5bf978bd268c048992fa8ab6e1d" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.861807 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.952783 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r"] Feb 02 15:14:49 crc kubenswrapper[4869]: E0202 15:14:49.954705 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.954736 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.955011 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ab19c1-9bd6-4f8b-b295-aee078ee4b0d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.956253 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.959175 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.960284 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.961332 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.961712 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.962461 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.963050 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 02 15:14:49 crc kubenswrapper[4869]: I0202 15:14:49.968454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r"] Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.105502 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.105853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.105907 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.105951 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.106041 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 
15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.106083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208373 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208475 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208525 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.208647 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.209960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.212221 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.212508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.213745 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.219336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.226572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjq2r\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.294821 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:14:50 crc kubenswrapper[4869]: I0202 15:14:50.878804 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r"] Feb 02 15:14:51 crc kubenswrapper[4869]: I0202 15:14:51.884972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" event={"ID":"72dccf63-f84a-41bb-a601-d67db9557b64","Type":"ContainerStarted","Data":"cb482c559ab444f53af2ecfd711fbbc076264bbf3a03007a004bb5a9a70007ec"} Feb 02 15:14:51 crc kubenswrapper[4869]: I0202 15:14:51.885434 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" event={"ID":"72dccf63-f84a-41bb-a601-d67db9557b64","Type":"ContainerStarted","Data":"7abc890e08cd800cf1fb6fe7ea6576ca4b4aef2758ae10e37bf78f1a50af7996"} Feb 02 15:14:51 crc kubenswrapper[4869]: I0202 15:14:51.918133 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" podStartSLOduration=2.445159325 podStartE2EDuration="2.918112273s" podCreationTimestamp="2026-02-02 15:14:49 +0000 UTC" firstStartedPulling="2026-02-02 15:14:50.88672028 +0000 UTC m=+2492.531357050" lastFinishedPulling="2026-02-02 15:14:51.359673228 +0000 UTC m=+2493.004309998" observedRunningTime="2026-02-02 15:14:51.909966513 +0000 UTC m=+2493.554603283" watchObservedRunningTime="2026-02-02 15:14:51.918112273 +0000 UTC m=+2493.562749043" Feb 02 15:14:54 crc kubenswrapper[4869]: I0202 15:14:54.463308 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:14:54 crc kubenswrapper[4869]: E0202 15:14:54.464332 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.150768 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"] Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.152756 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.155173 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.155335 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.172848 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"] Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.216453 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.216521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.216563 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.318807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.318861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.318890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.320181 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") pod 
\"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.332854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.340522 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") pod \"collect-profiles-29500755-xwwrj\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.478335 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:00 crc kubenswrapper[4869]: I0202 15:15:00.993047 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"] Feb 02 15:15:01 crc kubenswrapper[4869]: I0202 15:15:01.997790 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" containerID="1ee657e7e391fb0be0a60133a3c2bc04a0767f387cf6cc279ee259f05131226f" exitCode=0 Feb 02 15:15:01 crc kubenswrapper[4869]: I0202 15:15:01.998248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" event={"ID":"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c","Type":"ContainerDied","Data":"1ee657e7e391fb0be0a60133a3c2bc04a0767f387cf6cc279ee259f05131226f"} Feb 02 15:15:01 crc kubenswrapper[4869]: I0202 15:15:01.998282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" event={"ID":"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c","Type":"ContainerStarted","Data":"82117ee2800615f38cf817041582a17d2015e04778d10023edf8baf4eeab0a02"} Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.396394 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.506982 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") pod \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.507086 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") pod \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.507259 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") pod \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\" (UID: \"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c\") " Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.508131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" (UID: "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.515060 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz" (OuterVolumeSpecName: "kube-api-access-h54pz") pod "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" (UID: "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c"). InnerVolumeSpecName "kube-api-access-h54pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.515798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" (UID: "8d86c4a4-a435-4f57-9566-eaa1e74d1f5c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.609407 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.609472 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:15:03 crc kubenswrapper[4869]: I0202 15:15:03.609485 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h54pz\" (UniqueName: \"kubernetes.io/projected/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c-kube-api-access-h54pz\") on node \"crc\" DevicePath \"\"" Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.017445 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" event={"ID":"8d86c4a4-a435-4f57-9566-eaa1e74d1f5c","Type":"ContainerDied","Data":"82117ee2800615f38cf817041582a17d2015e04778d10023edf8baf4eeab0a02"} Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.017486 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82117ee2800615f38cf817041582a17d2015e04778d10023edf8baf4eeab0a02" Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.017555 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj" Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.500262 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 15:15:04 crc kubenswrapper[4869]: I0202 15:15:04.509129 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500710-2vmgv"] Feb 02 15:15:05 crc kubenswrapper[4869]: I0202 15:15:05.472090 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab9815bf-1049-47c8-8eda-cf2602f2eb83" path="/var/lib/kubelet/pods/ab9815bf-1049-47c8-8eda-cf2602f2eb83/volumes" Feb 02 15:15:07 crc kubenswrapper[4869]: I0202 15:15:07.463188 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:07 crc kubenswrapper[4869]: E0202 15:15:07.463698 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:19 crc kubenswrapper[4869]: I0202 15:15:19.475047 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:19 crc kubenswrapper[4869]: E0202 15:15:19.476310 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:30 crc kubenswrapper[4869]: I0202 15:15:30.467195 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:30 crc kubenswrapper[4869]: E0202 15:15:30.467950 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:30 crc kubenswrapper[4869]: I0202 15:15:30.530009 4869 scope.go:117] "RemoveContainer" containerID="e8f482a348a44d3e230e5a4713b952ada13938b6875563e11d356097cf18334f" Feb 02 15:15:43 crc kubenswrapper[4869]: I0202 15:15:43.463439 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:43 crc kubenswrapper[4869]: E0202 15:15:43.464885 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:15:55 crc kubenswrapper[4869]: I0202 15:15:55.463449 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4" Feb 02 15:15:56 crc kubenswrapper[4869]: I0202 15:15:56.020375 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff"} Feb 02 15:16:01 crc kubenswrapper[4869]: I0202 15:16:01.069987 4869 generic.go:334] "Generic (PLEG): container finished" podID="72dccf63-f84a-41bb-a601-d67db9557b64" containerID="cb482c559ab444f53af2ecfd711fbbc076264bbf3a03007a004bb5a9a70007ec" exitCode=0 Feb 02 15:16:01 crc kubenswrapper[4869]: I0202 15:16:01.070074 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" event={"ID":"72dccf63-f84a-41bb-a601-d67db9557b64","Type":"ContainerDied","Data":"cb482c559ab444f53af2ecfd711fbbc076264bbf3a03007a004bb5a9a70007ec"} Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.491189 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.569994 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570582 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570659 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.570733 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") pod \"72dccf63-f84a-41bb-a601-d67db9557b64\" (UID: \"72dccf63-f84a-41bb-a601-d67db9557b64\") " Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.580847 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.595106 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph" (OuterVolumeSpecName: "ceph") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.595214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2" (OuterVolumeSpecName: "kube-api-access-jhkw2") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "kube-api-access-jhkw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.600308 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.601302 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory" (OuterVolumeSpecName: "inventory") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.601622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72dccf63-f84a-41bb-a601-d67db9557b64" (UID: "72dccf63-f84a-41bb-a601-d67db9557b64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673548 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673596 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhkw2\" (UniqueName: \"kubernetes.io/projected/72dccf63-f84a-41bb-a601-d67db9557b64-kube-api-access-jhkw2\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673616 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673635 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673653 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72dccf63-f84a-41bb-a601-d67db9557b64-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:02 crc kubenswrapper[4869]: I0202 15:16:02.673671 4869 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72dccf63-f84a-41bb-a601-d67db9557b64-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.092806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" event={"ID":"72dccf63-f84a-41bb-a601-d67db9557b64","Type":"ContainerDied","Data":"7abc890e08cd800cf1fb6fe7ea6576ca4b4aef2758ae10e37bf78f1a50af7996"} Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.092852 4869 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7abc890e08cd800cf1fb6fe7ea6576ca4b4aef2758ae10e37bf78f1a50af7996" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.092977 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjq2r" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.189418 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g"] Feb 02 15:16:03 crc kubenswrapper[4869]: E0202 15:16:03.190156 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" containerName="collect-profiles" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.190188 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" containerName="collect-profiles" Feb 02 15:16:03 crc kubenswrapper[4869]: E0202 15:16:03.190234 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dccf63-f84a-41bb-a601-d67db9557b64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.190252 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dccf63-f84a-41bb-a601-d67db9557b64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.190629 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" containerName="collect-profiles" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.190670 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dccf63-f84a-41bb-a601-d67db9557b64" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.191720 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.195363 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.195463 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.195842 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.198624 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.198688 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.201215 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.201260 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.204469 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g"] Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287584 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287662 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287698 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287839 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287900 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.287968 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389198 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.389349 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.393890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.394700 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.397611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.397611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.399439 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: 
I0202 15:16:03.402295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.407960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:03 crc kubenswrapper[4869]: I0202 15:16:03.517047 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" Feb 02 15:16:04 crc kubenswrapper[4869]: I0202 15:16:04.195491 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g"] Feb 02 15:16:04 crc kubenswrapper[4869]: I0202 15:16:04.196864 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:16:05 crc kubenswrapper[4869]: I0202 15:16:05.112002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" event={"ID":"cece8f41-7b97-43d1-b538-c09300006b15","Type":"ContainerStarted","Data":"77227ab15c4e6f6027db0220f21c3ecbc1457b11d5434d1902eaae9f95ef32c9"} Feb 02 15:16:05 crc kubenswrapper[4869]: I0202 15:16:05.112492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" event={"ID":"cece8f41-7b97-43d1-b538-c09300006b15","Type":"ContainerStarted","Data":"8d2984ec464ca86ed83beaded68c0b4de1fd280a2ba9f1825707b547eb063f6f"} Feb 02 15:16:05 crc kubenswrapper[4869]: I0202 15:16:05.146837 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" podStartSLOduration=1.670433555 podStartE2EDuration="2.146776336s" podCreationTimestamp="2026-02-02 15:16:03 +0000 UTC" firstStartedPulling="2026-02-02 15:16:04.196461482 +0000 UTC m=+2565.841098292" lastFinishedPulling="2026-02-02 15:16:04.672804263 +0000 UTC m=+2566.317441073" observedRunningTime="2026-02-02 15:16:05.135246224 +0000 UTC m=+2566.779883024" watchObservedRunningTime="2026-02-02 15:16:05.146776336 +0000 UTC m=+2566.791413146" Feb 02 15:17:00 crc kubenswrapper[4869]: I0202 15:17:00.697899 4869 generic.go:334] "Generic (PLEG): container finished" podID="cece8f41-7b97-43d1-b538-c09300006b15" containerID="77227ab15c4e6f6027db0220f21c3ecbc1457b11d5434d1902eaae9f95ef32c9" exitCode=0 Feb 02 15:17:00 crc kubenswrapper[4869]: I0202 15:17:00.698000 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" event={"ID":"cece8f41-7b97-43d1-b538-c09300006b15","Type":"ContainerDied","Data":"77227ab15c4e6f6027db0220f21c3ecbc1457b11d5434d1902eaae9f95ef32c9"} Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.187462 4869 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.187462 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224258 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") "
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") "
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") "
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224576 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") "
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224609 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") "
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224671 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") "
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.224698 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") pod \"cece8f41-7b97-43d1-b538-c09300006b15\" (UID: \"cece8f41-7b97-43d1-b538-c09300006b15\") "
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.233087 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph" (OuterVolumeSpecName: "ceph") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.233114 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf" (OuterVolumeSpecName: "kube-api-access-srbhf") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "kube-api-access-srbhf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.236420 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.257578 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory" (OuterVolumeSpecName: "inventory") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.258482 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.259341 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.280787 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "cece8f41-7b97-43d1-b538-c09300006b15" (UID: "cece8f41-7b97-43d1-b538-c09300006b15"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326748 4869 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326781 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326792 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326801 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-ceph\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326810 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srbhf\" (UniqueName: \"kubernetes.io/projected/cece8f41-7b97-43d1-b538-c09300006b15-kube-api-access-srbhf\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326819 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.326828 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cece8f41-7b97-43d1-b538-c09300006b15-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.719541 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g" event={"ID":"cece8f41-7b97-43d1-b538-c09300006b15","Type":"ContainerDied","Data":"8d2984ec464ca86ed83beaded68c0b4de1fd280a2ba9f1825707b547eb063f6f"}
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.719598 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2984ec464ca86ed83beaded68c0b4de1fd280a2ba9f1825707b547eb063f6f"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.719655 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.812325 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"]
Feb 02 15:17:02 crc kubenswrapper[4869]: E0202 15:17:02.813074 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cece8f41-7b97-43d1-b538-c09300006b15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.813101 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cece8f41-7b97-43d1-b538-c09300006b15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.813350 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cece8f41-7b97-43d1-b538-c09300006b15" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.814059 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.818753 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.818783 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.818821 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.818926 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.819418 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.819557 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.829763 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"]
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837274 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837323 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.837512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939744 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939826 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939869 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939928 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.939969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.944377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.944378 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.951503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.952024 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.952607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:02 crc kubenswrapper[4869]: I0202 15:17:02.958250 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:03 crc kubenswrapper[4869]: I0202 15:17:03.140618 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"
Feb 02 15:17:03 crc kubenswrapper[4869]: I0202 15:17:03.671439 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9"]
Feb 02 15:17:03 crc kubenswrapper[4869]: W0202 15:17:03.677324 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c45a4e_9fe0_4d8d_a74d_162a45a36d5e.slice/crio-a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f WatchSource:0}: Error finding container a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f: Status 404 returned error can't find the container with id a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f
Feb 02 15:17:03 crc kubenswrapper[4869]: I0202 15:17:03.728536 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" event={"ID":"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e","Type":"ContainerStarted","Data":"a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f"}
Feb 02 15:17:04 crc kubenswrapper[4869]: I0202 15:17:04.737715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" event={"ID":"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e","Type":"ContainerStarted","Data":"a1bcc83de6c8c3d6d8f0d46b65b7aea3a466ecc90ab2e07ea6784ad03b72f134"}
Feb 02 15:17:04 crc kubenswrapper[4869]: I0202 15:17:04.755455 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" podStartSLOduration=2.330080433 podStartE2EDuration="2.755437914s" podCreationTimestamp="2026-02-02 15:17:02 +0000 UTC" firstStartedPulling="2026-02-02 15:17:03.680300239 +0000 UTC m=+2625.324937009" lastFinishedPulling="2026-02-02 15:17:04.1056577 +0000 UTC m=+2625.750294490" observedRunningTime="2026-02-02 15:17:04.753819725 +0000 UTC m=+2626.398456545" watchObservedRunningTime="2026-02-02 15:17:04.755437914 +0000 UTC m=+2626.400074684"
Feb 02 15:17:11 crc kubenswrapper[4869]: I0202 15:17:11.948893 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"]
Feb 02 15:17:11 crc kubenswrapper[4869]: I0202 15:17:11.952028 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:11 crc kubenswrapper[4869]: I0202 15:17:11.993438 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"]
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.023577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.023738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.023815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.125562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.125639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.125661 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.126167 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.126311 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.143859 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") pod \"redhat-operators-88wj9\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") " pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.279586 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:12 crc kubenswrapper[4869]: I0202 15:17:12.802019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"]
Feb 02 15:17:13 crc kubenswrapper[4869]: I0202 15:17:13.830708 4869 generic.go:334] "Generic (PLEG): container finished" podID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerID="c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea" exitCode=0
Feb 02 15:17:13 crc kubenswrapper[4869]: I0202 15:17:13.830841 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerDied","Data":"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea"}
Feb 02 15:17:13 crc kubenswrapper[4869]: I0202 15:17:13.831816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerStarted","Data":"935e53aa74a8a25a69dce794297ca87892c29b09030ee86052fff3f55b981f1f"}
Feb 02 15:17:15 crc kubenswrapper[4869]: I0202 15:17:15.853269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerStarted","Data":"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"}
Feb 02 15:17:16 crc kubenswrapper[4869]: I0202 15:17:16.865599 4869 generic.go:334] "Generic (PLEG): container finished" podID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerID="77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945" exitCode=0
Feb 02 15:17:16 crc kubenswrapper[4869]: I0202 15:17:16.865649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerDied","Data":"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"}
Feb 02 15:17:17 crc kubenswrapper[4869]: I0202 15:17:17.886731 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerStarted","Data":"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"}
Feb 02 15:17:17 crc kubenswrapper[4869]: I0202 15:17:17.911857 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-88wj9" podStartSLOduration=3.457839542 podStartE2EDuration="6.911835023s" podCreationTimestamp="2026-02-02 15:17:11 +0000 UTC" firstStartedPulling="2026-02-02 15:17:13.833198884 +0000 UTC m=+2635.477835654" lastFinishedPulling="2026-02-02 15:17:17.287194365 +0000 UTC m=+2638.931831135" observedRunningTime="2026-02-02 15:17:17.909532776 +0000 UTC m=+2639.554169546" watchObservedRunningTime="2026-02-02 15:17:17.911835023 +0000 UTC m=+2639.556471803"
Feb 02 15:17:22 crc kubenswrapper[4869]: I0202 15:17:22.280152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:22 crc kubenswrapper[4869]: I0202 15:17:22.280825 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:23 crc kubenswrapper[4869]: I0202 15:17:23.341535 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-88wj9" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server" probeResult="failure" output=<
Feb 02 15:17:23 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s
Feb 02 15:17:23 crc kubenswrapper[4869]: >
Feb 02 15:17:32 crc kubenswrapper[4869]: I0202 15:17:32.347728 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:32 crc kubenswrapper[4869]: I0202 15:17:32.432554 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:32 crc kubenswrapper[4869]: I0202 15:17:32.595232 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"]
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.054535 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-88wj9" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server" containerID="cri-o://07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130" gracePeriod=2
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.541875 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.726077 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") pod \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") "
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.726226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") pod \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") "
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.726322 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") pod \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\" (UID: \"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2\") "
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.727625 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities" (OuterVolumeSpecName: "utilities") pod "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" (UID: "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.734075 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr" (OuterVolumeSpecName: "kube-api-access-5qnpr") pod "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" (UID: "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2"). InnerVolumeSpecName "kube-api-access-5qnpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.830364 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qnpr\" (UniqueName: \"kubernetes.io/projected/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-kube-api-access-5qnpr\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.830479 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.862291 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" (UID: "8fdd7095-96af-49a9-bce3-cd07fbc6f1f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:17:34 crc kubenswrapper[4869]: I0202 15:17:34.932814 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.068143 4869 generic.go:334] "Generic (PLEG): container finished" podID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerID="07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130" exitCode=0
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.068212 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-88wj9"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.068251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerDied","Data":"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"}
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.068945 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-88wj9" event={"ID":"8fdd7095-96af-49a9-bce3-cd07fbc6f1f2","Type":"ContainerDied","Data":"935e53aa74a8a25a69dce794297ca87892c29b09030ee86052fff3f55b981f1f"}
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.069000 4869 scope.go:117] "RemoveContainer" containerID="07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.093357 4869 scope.go:117] "RemoveContainer" containerID="77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.119003 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"]
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.121851 4869 scope.go:117] "RemoveContainer" containerID="c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.135694 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-88wj9"]
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.175311 4869 scope.go:117] "RemoveContainer" containerID="07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"
Feb 02 15:17:35 crc kubenswrapper[4869]: E0202 15:17:35.176140 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130\": container with ID starting with 07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130 not found: ID does not exist" containerID="07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.176190 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130"} err="failed to get container status \"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130\": rpc error: code = NotFound desc = could not find container \"07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130\": container with ID starting with 07cb72fca8f08797e888bba7c5206762a6c232678610fcd286b1fd0a91357130 not found: ID does not exist"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.176220 4869 scope.go:117] "RemoveContainer" containerID="77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"
Feb 02 15:17:35 crc kubenswrapper[4869]: E0202 15:17:35.176855 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945\": container with ID starting with 77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945 not found: ID does not exist" containerID="77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.176917 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945"} err="failed to get container status \"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945\": rpc error: code = NotFound desc = could not find container \"77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945\": container with ID starting with 77369f089eb3335231bd106bb2de811516231a0d1135e1670feb3ff663648945 not found: ID does not exist"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.177072 4869 scope.go:117] "RemoveContainer" containerID="c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea"
Feb 02 15:17:35 crc kubenswrapper[4869]: E0202 15:17:35.177515 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea\": container with ID starting with c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea not found: ID does not exist" containerID="c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.177552 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea"} err="failed to get container status \"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea\": rpc error: code = NotFound desc = could not find container \"c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea\": container with ID starting with c54e42423c57079cc365d5abb21b201c5375c2819cac53dd042e3b41411652ea not found: ID does not exist"
Feb 02 15:17:35 crc kubenswrapper[4869]: I0202 15:17:35.482306 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" path="/var/lib/kubelet/pods/8fdd7095-96af-49a9-bce3-cd07fbc6f1f2/volumes"
Feb 02 15:18:15 crc kubenswrapper[4869]: I0202 15:18:15.304312 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:18:15 crc kubenswrapper[4869]: I0202 15:18:15.304983 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:18:45 crc kubenswrapper[4869]: I0202 15:18:45.304785 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:18:45 crc kubenswrapper[4869]: I0202 15:18:45.305499 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.304279 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.304876 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.305003 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.305949 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 15:19:15 crc kubenswrapper[4869]: I0202 15:19:15.306049 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff" gracePeriod=600
Feb 02 15:19:16 crc kubenswrapper[4869]: I0202 15:19:16.106000 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff" exitCode=0
Feb 02 15:19:16 crc kubenswrapper[4869]: I0202 15:19:16.106087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff"}
Feb 02 15:19:16 crc kubenswrapper[4869]: I0202 15:19:16.106383 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"}
Feb 02 15:19:16 crc kubenswrapper[4869]: I0202 15:19:16.106412 4869 scope.go:117] "RemoveContainer" containerID="4c60cc292e232360ce82950e8c083aa8d87d97d44a4ad0b2e8ec3f1b9d9a0df4"
Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.928307 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"]
Feb 02 15:20:49 crc kubenswrapper[4869]: E0202 15:20:49.929206 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="extract-utilities"
Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.929221 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="extract-utilities"
Feb 02 15:20:49 crc kubenswrapper[4869]: E0202 15:20:49.929239 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="extract-content"
Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.929247 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="extract-content"
Feb 02 15:20:49 crc kubenswrapper[4869]: E0202 15:20:49.929262 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server"
Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.929271 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server"
Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.929511 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fdd7095-96af-49a9-bce3-cd07fbc6f1f2" containerName="registry-server"
Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.931372 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:49 crc kubenswrapper[4869]: I0202 15:20:49.953901 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"]
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.029663 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.029771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.029851 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.131513 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.131642 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.131786 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.132079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.132401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.156182 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") pod \"certified-operators-2fvl2\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.284524 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2fvl2"
Feb 02 15:20:50 crc kubenswrapper[4869]: I0202 15:20:50.782014 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"]
Feb 02 15:20:51 crc kubenswrapper[4869]: I0202 15:20:51.083407 4869 generic.go:334] "Generic (PLEG): container finished" podID="5d60644a-3c45-4853-b628-4e9517c65940" containerID="f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72" exitCode=0
Feb 02 15:20:51 crc kubenswrapper[4869]: I0202 15:20:51.083446 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerDied","Data":"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72"}
Feb 02 15:20:51 crc kubenswrapper[4869]: I0202 15:20:51.083470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerStarted","Data":"5f377289cdedfb216d3a3b90c052283a8e116cbfd4faa6e26b39c99e0747b88e"}
Feb 02 15:20:52 crc kubenswrapper[4869]: I0202 15:20:52.100502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerStarted","Data":"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00"}
Feb 02 15:20:53 crc kubenswrapper[4869]: I0202 15:20:53.112217 4869 generic.go:334] "Generic (PLEG): container finished" podID="5d60644a-3c45-4853-b628-4e9517c65940" containerID="f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00" exitCode=0
Feb 02 15:20:53 crc kubenswrapper[4869]: I0202 15:20:53.112326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerDied","Data":"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00"}
Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.122794 4869 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerStarted","Data":"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b"} Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.147999 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2fvl2" podStartSLOduration=2.7270488090000002 podStartE2EDuration="5.147966853s" podCreationTimestamp="2026-02-02 15:20:49 +0000 UTC" firstStartedPulling="2026-02-02 15:20:51.085618837 +0000 UTC m=+2852.730255647" lastFinishedPulling="2026-02-02 15:20:53.506536921 +0000 UTC m=+2855.151173691" observedRunningTime="2026-02-02 15:20:54.145445322 +0000 UTC m=+2855.790082092" watchObservedRunningTime="2026-02-02 15:20:54.147966853 +0000 UTC m=+2855.792603673" Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.899365 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.901823 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:54 crc kubenswrapper[4869]: I0202 15:20:54.919048 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.021820 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.021917 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.021977 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.123827 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.124041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.124151 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.124628 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.124859 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.146153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") pod \"community-operators-4g924\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.224732 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:20:55 crc kubenswrapper[4869]: I0202 15:20:55.755850 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:20:55 crc kubenswrapper[4869]: W0202 15:20:55.766517 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3add0bf_cfd3_4829_bfb6_e72ca53eab05.slice/crio-422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d WatchSource:0}: Error finding container 422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d: Status 404 returned error can't find the container with id 422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d Feb 02 15:20:56 crc kubenswrapper[4869]: I0202 15:20:56.141315 4869 generic.go:334] "Generic (PLEG): container finished" podID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerID="af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8" exitCode=0 Feb 02 15:20:56 crc kubenswrapper[4869]: I0202 15:20:56.141690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerDied","Data":"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8"} Feb 02 15:20:56 crc kubenswrapper[4869]: I0202 15:20:56.141726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerStarted","Data":"422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d"} Feb 02 15:20:57 crc kubenswrapper[4869]: I0202 15:20:57.157215 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" 
event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerStarted","Data":"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04"} Feb 02 15:20:58 crc kubenswrapper[4869]: I0202 15:20:58.178802 4869 generic.go:334] "Generic (PLEG): container finished" podID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerID="55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04" exitCode=0 Feb 02 15:20:58 crc kubenswrapper[4869]: I0202 15:20:58.178848 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerDied","Data":"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04"} Feb 02 15:20:59 crc kubenswrapper[4869]: I0202 15:20:59.191411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerStarted","Data":"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c"} Feb 02 15:20:59 crc kubenswrapper[4869]: I0202 15:20:59.217216 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4g924" podStartSLOduration=2.747626486 podStartE2EDuration="5.217191282s" podCreationTimestamp="2026-02-02 15:20:54 +0000 UTC" firstStartedPulling="2026-02-02 15:20:56.143488336 +0000 UTC m=+2857.788125096" lastFinishedPulling="2026-02-02 15:20:58.613053082 +0000 UTC m=+2860.257689892" observedRunningTime="2026-02-02 15:20:59.211079863 +0000 UTC m=+2860.855716633" watchObservedRunningTime="2026-02-02 15:20:59.217191282 +0000 UTC m=+2860.861828072" Feb 02 15:21:00 crc kubenswrapper[4869]: I0202 15:21:00.285133 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:00 crc kubenswrapper[4869]: I0202 15:21:00.285208 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:00 crc kubenswrapper[4869]: I0202 15:21:00.339695 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:01 crc kubenswrapper[4869]: I0202 15:21:01.261213 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:01 crc kubenswrapper[4869]: I0202 15:21:01.895000 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.226296 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2fvl2" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="registry-server" containerID="cri-o://eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" gracePeriod=2 Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.742018 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.792432 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") pod \"5d60644a-3c45-4853-b628-4e9517c65940\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.792618 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") pod \"5d60644a-3c45-4853-b628-4e9517c65940\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.792666 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") pod \"5d60644a-3c45-4853-b628-4e9517c65940\" (UID: \"5d60644a-3c45-4853-b628-4e9517c65940\") " Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.793562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities" (OuterVolumeSpecName: "utilities") pod "5d60644a-3c45-4853-b628-4e9517c65940" (UID: "5d60644a-3c45-4853-b628-4e9517c65940"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.801903 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c" (OuterVolumeSpecName: "kube-api-access-x7c7c") pod "5d60644a-3c45-4853-b628-4e9517c65940" (UID: "5d60644a-3c45-4853-b628-4e9517c65940"). InnerVolumeSpecName "kube-api-access-x7c7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.858690 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d60644a-3c45-4853-b628-4e9517c65940" (UID: "5d60644a-3c45-4853-b628-4e9517c65940"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.894491 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7c7c\" (UniqueName: \"kubernetes.io/projected/5d60644a-3c45-4853-b628-4e9517c65940-kube-api-access-x7c7c\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.894546 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:03 crc kubenswrapper[4869]: I0202 15:21:03.894566 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d60644a-3c45-4853-b628-4e9517c65940-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240233 4869 generic.go:334] "Generic (PLEG): container finished" podID="5d60644a-3c45-4853-b628-4e9517c65940" containerID="eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" exitCode=0 Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240316 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerDied","Data":"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b"} Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240383 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2fvl2" event={"ID":"5d60644a-3c45-4853-b628-4e9517c65940","Type":"ContainerDied","Data":"5f377289cdedfb216d3a3b90c052283a8e116cbfd4faa6e26b39c99e0747b88e"} Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240425 4869 scope.go:117] "RemoveContainer" containerID="eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.240427 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2fvl2" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.267424 4869 scope.go:117] "RemoveContainer" containerID="f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.301209 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.310948 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2fvl2"] Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.321665 4869 scope.go:117] "RemoveContainer" containerID="f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.353971 4869 scope.go:117] "RemoveContainer" containerID="eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" Feb 02 15:21:04 crc kubenswrapper[4869]: E0202 15:21:04.354797 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b\": container with ID starting with eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b not found: ID does not exist" containerID="eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.354832 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b"} err="failed to get container status \"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b\": rpc error: code = NotFound desc = could not find container \"eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b\": container with ID starting with eef103c2610f683ca2fa90ff12c07a0d70651eac2530fe7f0d095548ebabdc4b not found: ID does not exist" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.354851 4869 scope.go:117] "RemoveContainer" containerID="f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00" Feb 02 15:21:04 crc kubenswrapper[4869]: E0202 15:21:04.355788 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00\": container with ID starting with f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00 not found: ID does not exist" containerID="f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.355815 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00"} err="failed to get container status \"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00\": rpc error: code = NotFound desc = could not find container \"f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00\": container with ID starting with f8014dc42bf834a6641f349eb5c23ded0a7d9356655bf12cec86befad25dca00 not found: ID does not exist" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.355828 4869 scope.go:117] "RemoveContainer" containerID="f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72" Feb 02 15:21:04 crc kubenswrapper[4869]: E0202 15:21:04.356320 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72\": container with ID starting with f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72 not found: ID does not exist" containerID="f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72" Feb 02 15:21:04 crc kubenswrapper[4869]: I0202 15:21:04.356345 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72"} err="failed to get container status \"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72\": rpc error: code = NotFound desc = could not find container \"f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72\": container with ID starting with f17945b6b9dd2e3f9d135167e002539123ed3ec0636b3931f43258e586320b72 not found: ID does not exist" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.226795 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.227436 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.317768 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.405717 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:05 crc kubenswrapper[4869]: I0202 15:21:05.480862 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d60644a-3c45-4853-b628-4e9517c65940" path="/var/lib/kubelet/pods/5d60644a-3c45-4853-b628-4e9517c65940/volumes" Feb 02 15:21:07 crc kubenswrapper[4869]: I0202 15:21:07.699396 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:21:07 crc kubenswrapper[4869]: I0202 15:21:07.699971 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4g924" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="registry-server" containerID="cri-o://a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" gracePeriod=2 Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.167375 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282130 4869 generic.go:334] "Generic (PLEG): container finished" podID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerID="a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" exitCode=0 Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282170 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4g924" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282177 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerDied","Data":"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c"} Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g924" event={"ID":"b3add0bf-cfd3-4829-bfb6-e72ca53eab05","Type":"ContainerDied","Data":"422b72cfe09dad0c4581a2485663235ffb13695ccd75c57b25d343bb782e112d"} Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.282227 4869 scope.go:117] "RemoveContainer" containerID="a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.283440 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") pod \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.283585 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") pod \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.283659 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") pod \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\" (UID: \"b3add0bf-cfd3-4829-bfb6-e72ca53eab05\") " Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.284672 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities" (OuterVolumeSpecName: "utilities") pod "b3add0bf-cfd3-4829-bfb6-e72ca53eab05" (UID: "b3add0bf-cfd3-4829-bfb6-e72ca53eab05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.288738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9" (OuterVolumeSpecName: "kube-api-access-6g7p9") pod "b3add0bf-cfd3-4829-bfb6-e72ca53eab05" (UID: "b3add0bf-cfd3-4829-bfb6-e72ca53eab05"). InnerVolumeSpecName "kube-api-access-6g7p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.337448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3add0bf-cfd3-4829-bfb6-e72ca53eab05" (UID: "b3add0bf-cfd3-4829-bfb6-e72ca53eab05"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.354780 4869 scope.go:117] "RemoveContainer" containerID="55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.372638 4869 scope.go:117] "RemoveContainer" containerID="af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.386407 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.386442 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.386453 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g7p9\" (UniqueName: \"kubernetes.io/projected/b3add0bf-cfd3-4829-bfb6-e72ca53eab05-kube-api-access-6g7p9\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.407703 4869 scope.go:117] "RemoveContainer" containerID="a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" Feb 02 15:21:08 crc kubenswrapper[4869]: E0202 15:21:08.408222 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c\": container with ID starting with a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c not found: ID does not exist" containerID="a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.408274 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c"} err="failed to get container status \"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c\": rpc error: code = NotFound desc = could not find container \"a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c\": container with ID starting with a96637efd604fdf37baa717a3056e7806bc065fc4082f9404c95cbcc5b6cc95c not found: ID does not exist" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.408300 4869 scope.go:117] "RemoveContainer" containerID="55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04" Feb 02 15:21:08 crc kubenswrapper[4869]: E0202 15:21:08.408715 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04\": container with ID starting with 55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04 not found: ID does not exist" containerID="55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.408743 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04"} err="failed to get container status \"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04\": rpc error: code = NotFound desc = could not find container 
\"55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04\": container with ID starting with 55700b9313c63a4b374af52e15fbf84d4b78e303f1efdfec8ee8bfc6038cab04 not found: ID does not exist" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.408762 4869 scope.go:117] "RemoveContainer" containerID="af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8" Feb 02 15:21:08 crc kubenswrapper[4869]: E0202 15:21:08.409083 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8\": container with ID starting with af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8 not found: ID does not exist" containerID="af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.409102 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8"} err="failed to get container status \"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8\": rpc error: code = NotFound desc = could not find container \"af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8\": container with ID starting with af33900cdc8e5646bf513aab1f278fb3b5ee40cf584ea729f88076205b140aa8 not found: ID does not exist" Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.620483 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:21:08 crc kubenswrapper[4869]: I0202 15:21:08.667647 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4g924"] Feb 02 15:21:09 crc kubenswrapper[4869]: I0202 15:21:09.478106 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" path="/var/lib/kubelet/pods/b3add0bf-cfd3-4829-bfb6-e72ca53eab05/volumes" Feb 02 15:21:15 crc kubenswrapper[4869]: I0202 15:21:15.304614 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:21:15 crc kubenswrapper[4869]: I0202 15:21:15.305412 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:21:26 crc kubenswrapper[4869]: I0202 15:21:26.494563 4869 generic.go:334] "Generic (PLEG): container finished" podID="83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" containerID="a1bcc83de6c8c3d6d8f0d46b65b7aea3a466ecc90ab2e07ea6784ad03b72f134" exitCode=0 Feb 02 15:21:26 crc kubenswrapper[4869]: I0202 15:21:26.495230 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" event={"ID":"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e","Type":"ContainerDied","Data":"a1bcc83de6c8c3d6d8f0d46b65b7aea3a466ecc90ab2e07ea6784ad03b72f134"} Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.012620 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106615 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106736 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106762 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106803 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.106862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") pod \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\" (UID: \"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e\") " Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.113165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph" (OuterVolumeSpecName: "ceph") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.115156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.115164 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f" (OuterVolumeSpecName: "kube-api-access-9px9f") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "kube-api-access-9px9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.132843 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.139239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.145483 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory" (OuterVolumeSpecName: "inventory") pod "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" (UID: "83c45a4e-9fe0-4d8d-a74d-162a45a36d5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209391 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209444 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209467 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209487 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9px9f\" (UniqueName: \"kubernetes.io/projected/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-kube-api-access-9px9f\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209508 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.209528 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c45a4e-9fe0-4d8d-a74d-162a45a36d5e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.515121 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" event={"ID":"83c45a4e-9fe0-4d8d-a74d-162a45a36d5e","Type":"ContainerDied","Data":"a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f"} Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.515173 4869 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a4a8fd6dd4e4633cb5fdaf4ac3822fd4a1a62e5ac441f60a39f809bbcfef0f7f" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.515207 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.656436 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"] Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.656845 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.656869 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.656886 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="extract-utilities" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.656895 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="extract-utilities" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.656928 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="extract-utilities" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.656938 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="extract-utilities" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.656993 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657007 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.657034 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="extract-content" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657043 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="extract-content" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.657096 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657106 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="registry-server" Feb 02 15:21:28 crc kubenswrapper[4869]: E0202 15:21:28.657127 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="extract-content" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657135 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="extract-content" Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657511 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c45a4e-9fe0-4d8d-a74d-162a45a36d5e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" 
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657548 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d60644a-3c45-4853-b628-4e9517c65940" containerName="registry-server"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.657568 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3add0bf-cfd3-4829-bfb6-e72ca53eab05" containerName="registry-server"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.658434 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.660721 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nhnd5"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.661021 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662012 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662170 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662324 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662355 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662453 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662497 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.662754 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.677105 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"]
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829485 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829549 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829607 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829646 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.829985 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.830073 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.830147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.830190 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.830367 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.931969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932066 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932167 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932230 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932355 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.932582 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.933309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.934066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.937348 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.939017 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.940868 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.941826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.946258 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.946314 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.947174 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.958870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.969295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:28 crc kubenswrapper[4869]: I0202 15:21:28.977431 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:21:29 crc kubenswrapper[4869]: I0202 15:21:29.559187 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"]
Feb 02 15:21:29 crc kubenswrapper[4869]: I0202 15:21:29.567408 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 15:21:30 crc kubenswrapper[4869]: I0202 15:21:30.535771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" event={"ID":"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e","Type":"ContainerStarted","Data":"5f5c174b338b5c46b501b4e35b795946f7906c1879c7b0cdc3ebf6b01cbaf2ff"}
Feb 02 15:21:30 crc kubenswrapper[4869]: I0202 15:21:30.536145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" event={"ID":"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e","Type":"ContainerStarted","Data":"c9c943b42281a5f7a9cffcb44ae79a79b00120da70449b3e3ad985f6375d8b56"}
Feb 02 15:21:30 crc kubenswrapper[4869]: I0202 15:21:30.559918 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" podStartSLOduration=2.111595725 podStartE2EDuration="2.55988109s" podCreationTimestamp="2026-02-02 15:21:28 +0000 UTC" firstStartedPulling="2026-02-02 15:21:29.567172598 +0000 UTC m=+2891.211809368" lastFinishedPulling="2026-02-02 15:21:30.015457953 +0000 UTC m=+2891.660094733" observedRunningTime="2026-02-02 15:21:30.553873993 +0000 UTC m=+2892.198510763" watchObservedRunningTime="2026-02-02 15:21:30.55988109 +0000 UTC m=+2892.204517860"
Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.248703 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"]
Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.253274 4869 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.272811 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.430861 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.431066 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.431364 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.533603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.533681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.533780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.534167 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.534218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.576252 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") pod \"redhat-marketplace-ttqqd\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:42 crc kubenswrapper[4869]: I0202 15:21:42.581955 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:43 crc kubenswrapper[4869]: I0202 15:21:43.097204 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:43 crc kubenswrapper[4869]: W0202 15:21:43.114788 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6464971e_d1e4_4e00_b758_17fb7448a055.slice/crio-2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e WatchSource:0}: Error finding container 2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e: Status 404 returned error can't find the container with id 2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e Feb 02 15:21:43 crc kubenswrapper[4869]: I0202 15:21:43.657841 4869 generic.go:334] "Generic (PLEG): container finished" podID="6464971e-d1e4-4e00-b758-17fb7448a055" containerID="0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f" exitCode=0 Feb 02 15:21:43 crc kubenswrapper[4869]: I0202 15:21:43.657956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerDied","Data":"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f"} Feb 02 15:21:43 crc kubenswrapper[4869]: I0202 15:21:43.658292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerStarted","Data":"2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e"} Feb 02 15:21:44 crc kubenswrapper[4869]: I0202 15:21:44.668271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerStarted","Data":"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733"} Feb 02 15:21:45 crc kubenswrapper[4869]: I0202 15:21:45.305103 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:21:45 crc kubenswrapper[4869]: I0202 15:21:45.305470 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:21:45 crc kubenswrapper[4869]: I0202 15:21:45.685588 4869 generic.go:334] "Generic (PLEG): container finished" podID="6464971e-d1e4-4e00-b758-17fb7448a055" containerID="e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733" exitCode=0 Feb 02 15:21:45 crc kubenswrapper[4869]: I0202 15:21:45.685669 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerDied","Data":"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733"} Feb 02 15:21:46 crc kubenswrapper[4869]: I0202 15:21:46.696830 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerStarted","Data":"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5"} Feb 02 15:21:46 crc kubenswrapper[4869]: I0202 15:21:46.723800 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ttqqd" podStartSLOduration=2.279986923 podStartE2EDuration="4.723773098s" podCreationTimestamp="2026-02-02 15:21:42 +0000 UTC" firstStartedPulling="2026-02-02 15:21:43.659713138 +0000 UTC m=+2905.304349938" lastFinishedPulling="2026-02-02 15:21:46.103499313 +0000 UTC m=+2907.748136113" observedRunningTime="2026-02-02 15:21:46.721285917 +0000 UTC m=+2908.365922727" watchObservedRunningTime="2026-02-02 15:21:46.723773098 +0000 UTC m=+2908.368409888" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.583043 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.583751 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.647670 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.808393 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:52 crc kubenswrapper[4869]: I0202 15:21:52.891298 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:54 crc kubenswrapper[4869]: I0202 15:21:54.780239 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ttqqd" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="registry-server" containerID="cri-o://aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" gracePeriod=2 Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.283606 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.397694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") pod \"6464971e-d1e4-4e00-b758-17fb7448a055\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.397889 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") pod \"6464971e-d1e4-4e00-b758-17fb7448a055\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.398110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") pod \"6464971e-d1e4-4e00-b758-17fb7448a055\" (UID: \"6464971e-d1e4-4e00-b758-17fb7448a055\") " Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.399117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities" (OuterVolumeSpecName: "utilities") pod "6464971e-d1e4-4e00-b758-17fb7448a055" (UID: "6464971e-d1e4-4e00-b758-17fb7448a055"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.403362 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk" (OuterVolumeSpecName: "kube-api-access-wwnmk") pod "6464971e-d1e4-4e00-b758-17fb7448a055" (UID: "6464971e-d1e4-4e00-b758-17fb7448a055"). InnerVolumeSpecName "kube-api-access-wwnmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.428035 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6464971e-d1e4-4e00-b758-17fb7448a055" (UID: "6464971e-d1e4-4e00-b758-17fb7448a055"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.500379 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.500433 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6464971e-d1e4-4e00-b758-17fb7448a055-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.500456 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwnmk\" (UniqueName: \"kubernetes.io/projected/6464971e-d1e4-4e00-b758-17fb7448a055-kube-api-access-wwnmk\") on node \"crc\" DevicePath \"\"" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.795039 4869 generic.go:334] "Generic (PLEG): container finished" podID="6464971e-d1e4-4e00-b758-17fb7448a055" containerID="aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" exitCode=0 Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.795139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerDied","Data":"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5"} Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.795229 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttqqd" event={"ID":"6464971e-d1e4-4e00-b758-17fb7448a055","Type":"ContainerDied","Data":"2a2abe68d2d038cc80c8b88a82fc9b398b160dc5599aa647a1f5b01507ba7d7e"} Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.795262 4869 scope.go:117] "RemoveContainer" containerID="aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.797048 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttqqd" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.832102 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.842546 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttqqd"] Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.842616 4869 scope.go:117] "RemoveContainer" containerID="e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.877316 4869 scope.go:117] "RemoveContainer" containerID="0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.941330 4869 scope.go:117] "RemoveContainer" containerID="aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" Feb 02 15:21:55 crc kubenswrapper[4869]: E0202 15:21:55.941780 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5\": container with ID starting with aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5 not found: ID does not exist" containerID="aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.941856 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5"} err="failed to get container status \"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5\": rpc error: code = NotFound desc = could not find container \"aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5\": container with ID starting with aedc6cee8fd01ddd7693b311024ae21a3f12d26d8bee1ae068e7a973adf7eec5 not found: ID does not exist" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.941899 4869 scope.go:117] "RemoveContainer" containerID="e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733" Feb 02 15:21:55 crc kubenswrapper[4869]: E0202 15:21:55.946880 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733\": container with ID starting with e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733 not found: ID does not exist" containerID="e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.947080 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733"} err="failed to get container status \"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733\": rpc error: code = NotFound desc = could not find container \"e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733\": container with ID starting with e4663e0a0b73bae81c7a0abb65b5e0467810385a8e73acf276bbcd1bf22e8733 not found: ID does not exist" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.947126 4869 scope.go:117] "RemoveContainer" containerID="0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f" Feb 02 15:21:55 crc kubenswrapper[4869]: E0202 15:21:55.947677 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f\": container with ID starting with 0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f not found: ID does not exist" containerID="0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f" Feb 02 15:21:55 crc kubenswrapper[4869]: I0202 15:21:55.947730 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f"} err="failed to get container status \"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f\": rpc error: code = NotFound desc = could not find container \"0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f\": container with ID starting with 0369cb6939a2b597ae3fbada12cdae4c2e0f372d167333bb6563843bcdef177f not found: ID does not exist" Feb 02 15:21:57 crc kubenswrapper[4869]: I0202 15:21:57.473020 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" path="/var/lib/kubelet/pods/6464971e-d1e4-4e00-b758-17fb7448a055/volumes" Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.304006 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.306466 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.306752 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.308159 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:22:15 crc kubenswrapper[4869]: I0202 15:22:15.308416 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" gracePeriod=600 Feb 02 15:22:15 crc kubenswrapper[4869]: E0202 15:22:15.433471 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:22:16 crc kubenswrapper[4869]: I0202 15:22:16.009161 4869 generic.go:334] 
"Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" exitCode=0 Feb 02 15:22:16 crc kubenswrapper[4869]: I0202 15:22:16.009231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"} Feb 02 15:22:16 crc kubenswrapper[4869]: I0202 15:22:16.009281 4869 scope.go:117] "RemoveContainer" containerID="d1c21cffc067fe1e07b927f212e7b8cbe355b9aed345baf6b6e65dce05f639ff" Feb 02 15:22:16 crc kubenswrapper[4869]: I0202 15:22:16.012795 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:22:16 crc kubenswrapper[4869]: E0202 15:22:16.013245 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:22:28 crc kubenswrapper[4869]: I0202 15:22:28.463507 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:22:28 crc kubenswrapper[4869]: E0202 15:22:28.464411 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:22:41 crc kubenswrapper[4869]: I0202 15:22:41.462476 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:22:41 crc kubenswrapper[4869]: E0202 15:22:41.463436 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:22:56 crc kubenswrapper[4869]: I0202 15:22:56.464161 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:22:56 crc kubenswrapper[4869]: E0202 15:22:56.465637 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:23:11 crc kubenswrapper[4869]: I0202 15:23:11.462451 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" 
Feb 02 15:23:11 crc kubenswrapper[4869]: E0202 15:23:11.463239 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:23:24 crc kubenswrapper[4869]: I0202 15:23:24.463176 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:23:24 crc kubenswrapper[4869]: E0202 15:23:24.464254 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:23:39 crc kubenswrapper[4869]: I0202 15:23:39.469004 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:23:39 crc kubenswrapper[4869]: E0202 15:23:39.469875 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:23:50 crc kubenswrapper[4869]: I0202 15:23:50.467366 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:23:50 crc kubenswrapper[4869]: E0202 15:23:50.469268 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:23:52 crc kubenswrapper[4869]: I0202 15:23:52.959108 4869 generic.go:334] "Generic (PLEG): container finished" podID="196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" containerID="5f5c174b338b5c46b501b4e35b795946f7906c1879c7b0cdc3ebf6b01cbaf2ff" exitCode=0
Feb 02 15:23:52 crc kubenswrapper[4869]: I0202 15:23:52.959225 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" event={"ID":"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e","Type":"ContainerDied","Data":"5f5c174b338b5c46b501b4e35b795946f7906c1879c7b0cdc3ebf6b01cbaf2ff"}
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.465221 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.550809 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.550862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.550936 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.550984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551170 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551206 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551264 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.551282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") pod \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\" (UID: \"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e\") "
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.564159 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds" (OuterVolumeSpecName: "kube-api-access-5g4ds") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "kube-api-access-5g4ds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.571368 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph" (OuterVolumeSpecName: "ceph") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.577278 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.585557 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.590039 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.599795 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.601962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory" (OuterVolumeSpecName: "inventory") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.604575 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.605045 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.605569 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.615396 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" (UID: "196ff3ae-e676-4d40-9de4-ea6ad23a1e5e"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653768 4869 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653813 4869 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653829 4869 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph-nova-0\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653843 4869 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653856 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ceph\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653868 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653880 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653892 4869 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653903 4869 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653935 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g4ds\" (UniqueName: \"kubernetes.io/projected/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-kube-api-access-5g4ds\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.653947 4869 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/196ff3ae-e676-4d40-9de4-ea6ad23a1e5e-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.981257 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk" event={"ID":"196ff3ae-e676-4d40-9de4-ea6ad23a1e5e","Type":"ContainerDied","Data":"c9c943b42281a5f7a9cffcb44ae79a79b00120da70449b3e3ad985f6375d8b56"}
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.981314 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c943b42281a5f7a9cffcb44ae79a79b00120da70449b3e3ad985f6375d8b56"
Feb 02 15:23:54 crc kubenswrapper[4869]: I0202 15:23:54.981392 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk"
Feb 02 15:24:01 crc kubenswrapper[4869]: I0202 15:24:01.462789 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:24:01 crc kubenswrapper[4869]: E0202 15:24:01.463721 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928147 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 02 15:24:08 crc kubenswrapper[4869]: E0202 15:24:08.928803 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="registry-server"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928821 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="registry-server"
Feb 02 15:24:08 crc kubenswrapper[4869]: E0202 15:24:08.928838 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="extract-utilities"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928844 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="extract-utilities"
Feb 02 15:24:08 crc kubenswrapper[4869]: E0202 15:24:08.928864 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928871 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:24:08 crc kubenswrapper[4869]: E0202 15:24:08.928886 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="extract-content"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.928891 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="extract-content"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.929094 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="196ff3ae-e676-4d40-9de4-ea6ad23a1e5e" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.929117 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6464971e-d1e4-4e00-b758-17fb7448a055" containerName="registry-server"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.930021 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.932422 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.932990 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953009 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953073 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953027 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953095 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953118 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953251 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953296 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9gh\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-kube-api-access-6f9gh\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953351 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953560 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953651 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-run\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953860 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:08 crc kubenswrapper[4869]: I0202 15:24:08.953921 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.016987 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.018687 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.025627 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.039479 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055176 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-scripts\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055212 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055240 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-dev\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055257 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-run\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055297 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-lib-modules\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0"
Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-ceph\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055378 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks8h4\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-kube-api-access-ks8h4\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055402 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055417 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-sys\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055436 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055481 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055495 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " 
pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055513 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9gh\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-kube-api-access-6f9gh\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055559 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-run\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055576 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055591 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055617 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055654 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055730 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055750 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055845 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.055881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-run\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.056098 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.056888 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-sys\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.057111 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc 
kubenswrapper[4869]: I0202 15:24:09.057439 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-dev\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.057491 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.057507 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.057607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.058605 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.061717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.062623 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.063209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.063713 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.076994 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-ceph\") pod \"cinder-volume-volume1-0\" (UID: 
\"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.089278 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9gh\" (UniqueName: \"kubernetes.io/projected/e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37-kube-api-access-6f9gh\") pod \"cinder-volume-volume1-0\" (UID: \"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37\") " pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-run\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157407 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157451 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-scripts\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157469 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157494 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-dev\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-lib-modules\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-ceph\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157577 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks8h4\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-kube-api-access-ks8h4\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-sys\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157739 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157775 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-run\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.157794 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158513 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-dev\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.158972 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.159044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-lib-modules\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.159192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-sys\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.161820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.161829 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-scripts\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.162323 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-ceph\") pod \"cinder-backup-0\" (UID: 
\"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.162389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.162859 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.178599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks8h4\" (UniqueName: \"kubernetes.io/projected/ffb18e2a-67e6-4932-97fb-dd57b66f6c93-kube-api-access-ks8h4\") pod \"cinder-backup-0\" (UID: \"ffb18e2a-67e6-4932-97fb-dd57b66f6c93\") " pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.249942 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.341721 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.494371 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-2vhkx"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.496294 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-2vhkx"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.496405 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.529383 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.530688 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.535843 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.546336 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.548216 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.550306 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.550461 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.550541 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.550628 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7fldw" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.572793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.572858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.572951 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.572970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573000 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573015 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " 
pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.573083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.600451 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.615055 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676610 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676682 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676741 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676771 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676826 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.676866 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.677570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.678141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.678384 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.681251 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.681894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.683547 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.693206 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.703628 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.706816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") pod \"manila-db-create-2vhkx\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.723362 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") pod \"manila-d921-account-create-update-shfv2\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.734504 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") pod \"horizon-74c696d745-m9v9m\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.741257 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.775968 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.777468 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778649 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778785 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778806 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.778847 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.783364 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q8bdk" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.783367 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.784080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.785480 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.811407 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.832340 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.870530 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880607 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.880694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.882078 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.882210 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.882723 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.884029 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.903252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.947469 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") pod \"horizon-6d66c5779c-pggjz\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.947540 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.948858 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.959782 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.960065 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.964552 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982135 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982212 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982240 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982256 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982278 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982376 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982412 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:09 crc kubenswrapper[4869]: I0202 15:24:09.982432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.016337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.037543 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085085 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085484 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085613 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085664 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085741 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.085804 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.086563 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.089730 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.090111 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.091798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.093447 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.101824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.102834 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.104136 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.122263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.133530 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.160301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37","Type":"ContainerStarted","Data":"413a71d83cad7dbb27c3eedc69feaa178bfef6776e9a6f53bd15629dd0ae3e78"} Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187246 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187306 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187323 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187378 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187411 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187432 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.187468 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.190313 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.190668 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.192765 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.193605 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.195674 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") 
pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0"
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.200230 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0"
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.200890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0"
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.214552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0"
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.242551 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " pod="openstack/glance-default-internal-api-0"
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.321238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.399243 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
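With every volume mounted, the sync loop can create the pod sandbox; that is what the "No sandbox for pod can be found. Need to start a new one" records from util.go:30 mark (glance-default-internal-api-0 reaches it at 15:24:10.321238, roughly 0.24s after its first volume check at 15:24:10.085085). The gap between those two points is a quick health signal for volume setup. A rough per-pod measurement of it, again a hypothetical stdlib-only helper under the same one-record-per-line assumption:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

var (
	// klog header, e.g. "I0202 15:24:10.016337" (level, month/day, wall time).
	tsRe  = regexp.MustCompile(`[IWE](\d{4} \d{2}:\d{2}:\d{2}\.\d{6})`)
	podRe = regexp.MustCompile(`pod="([^"]+)"`)
)

func main() {
	firstSeen := map[string]time.Time{} // pod -> first volume-check timestamp
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		pm := podRe.FindStringSubmatch(line)
		tm := tsRe.FindStringSubmatch(line)
		if pm == nil || tm == nil {
			continue
		}
		t, err := time.Parse("0102 15:04:05.000000", tm[1])
		if err != nil {
			continue
		}
		pod := pm[1]
		switch {
		case strings.Contains(line, "VerifyControllerAttachedVolume started"):
			if _, ok := firstSeen[pod]; !ok {
				firstSeen[pod] = t
			}
		case strings.Contains(line, "No sandbox for pod can be found"):
			if t0, ok := firstSeen[pod]; ok {
				fmt.Printf("%s: %v from first volume check to sandbox start\n", pod, t.Sub(t0))
			}
		}
	}
}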
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.515811 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"]
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.704634 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-2vhkx"]
Feb 02 15:24:10 crc kubenswrapper[4869]: W0202 15:24:10.715649 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b666475_dc9a_41e9_b087_b2042c2dd80f.slice/crio-0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a WatchSource:0}: Error finding container 0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a: Status 404 returned error can't find the container with id 0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.717414 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"]
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.734391 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"]
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.888152 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Feb 02 15:24:10 crc kubenswrapper[4869]: I0202 15:24:10.968418 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 15:24:11 crc kubenswrapper[4869]: W0202 15:24:11.027368 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94981156_d105_463b_90e1_db9b2dbbb853.slice/crio-57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9 WatchSource:0}: Error finding container 57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9: Status 404 returned error can't find the container with id 57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9
Feb 02 15:24:11 crc kubenswrapper[4869]: W0202 15:24:11.028850 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffb18e2a_67e6_4932_97fb_dd57b66f6c93.slice/crio-94be20283077f482426506b2c97be1d382fa38575982bc195623e0a24412fb0d WatchSource:0}: Error finding container 94be20283077f482426506b2c97be1d382fa38575982bc195623e0a24412fb0d: Status 404 returned error can't find the container with id 94be20283077f482426506b2c97be1d382fa38575982bc195623e0a24412fb0d
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.174707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerStarted","Data":"1e3835ffee852cf7e2e461dbfd0c1bce873454f7dd01eb6e5bb8f0bd42308327"}
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.180189 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2vhkx" event={"ID":"5b666475-dc9a-41e9-b087-b2042c2dd80f","Type":"ContainerStarted","Data":"e2b3a08d13bb54ca12a353c801a13c65fca6c0e6e63916392001244a909d1156"}
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.180244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2vhkx" event={"ID":"5b666475-dc9a-41e9-b087-b2042c2dd80f","Type":"ContainerStarted","Data":"0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a"}
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.184402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerStarted","Data":"57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9"}
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.187836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerStarted","Data":"2db55e6d04f2819c1e06bcde8e721cfa825f9601f520cf4e3f6565c2aaa1d4aa"}
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.189170 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ffb18e2a-67e6-4932-97fb-dd57b66f6c93","Type":"ContainerStarted","Data":"94be20283077f482426506b2c97be1d382fa38575982bc195623e0a24412fb0d"}
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.195300 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" containerID="f6a65d674c18b4d91e1a4a5378741c663bb46842c68ee5b840ab49a144aef022" exitCode=0
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.195361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d921-account-create-update-shfv2" event={"ID":"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607","Type":"ContainerDied","Data":"f6a65d674c18b4d91e1a4a5378741c663bb46842c68ee5b840ab49a144aef022"}
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.195395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d921-account-create-update-shfv2" event={"ID":"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607","Type":"ContainerStarted","Data":"0488d82d62aae3b848d73ce68527757f78ac4e24690c4bfdbb4078b5c06546b4"}
Feb 02 15:24:11 crc kubenswrapper[4869]: I0202 15:24:11.783663 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.170701 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"]
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.205693 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.223621 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-74748d768-vjhn2"]
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.225494 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74748d768-vjhn2"
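The "SyncLoop (PLEG): event for pod" records (kubelet.go:2453) are the pod lifecycle event generator relaying container-runtime state changes back into the sync loop; the payload after event= is plain JSON, where ID is the pod UID and Data is the CRI container or sandbox ID, and generic.go:334 adds the exit code when a container finishes (exitCode=0 above for the manila account-create job). A hedged decoding sketch (hypothetical helper; assumes each payload sits on one line):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
)

// Shape of the payload that follows "event=" in the PLEG records above:
// ID is the pod UID, Data the container (or sandbox) ID.
type plegEvent struct {
	ID, Type, Data string
}

var evRe = regexp.MustCompile(`event=(\{[^}]*\})`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		m := evRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		var ev plegEvent
		if err := json.Unmarshal([]byte(m[1]), &ev); err != nil {
			continue // not the JSON form assumed here
		}
		fmt.Printf("%-16s pod-uid=%s id=%.12s\n", ev.Type, ev.ID, ev.Data)
	}
}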
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.227127 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.237267 4869 generic.go:334] "Generic (PLEG): container finished" podID="5b666475-dc9a-41e9-b087-b2042c2dd80f" containerID="e2b3a08d13bb54ca12a353c801a13c65fca6c0e6e63916392001244a909d1156" exitCode=0
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.237335 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2vhkx" event={"ID":"5b666475-dc9a-41e9-b087-b2042c2dd80f","Type":"ContainerDied","Data":"e2b3a08d13bb54ca12a353c801a13c65fca6c0e6e63916392001244a909d1156"}
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.239139 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74748d768-vjhn2"]
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.276187 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerStarted","Data":"1afcccd94d0ae4b407fdf8e32cfa845c1df5d114a1c85b8851a8082600f3c817"}
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.280574 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37","Type":"ContainerStarted","Data":"f9d4129a4b135e4d9ca0c9026d3686e0e559273d968fe5246bfa69cd577729e7"}
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.280612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37","Type":"ContainerStarted","Data":"6abb4698df2580c10205409433ce54feee7c83af065a970024a448e0ecc48940"}
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.289179 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.290753 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerStarted","Data":"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4"}
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.324035 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"]
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.331211 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.3074167230000002 podStartE2EDuration="4.331180956s" podCreationTimestamp="2026-02-02 15:24:08 +0000 UTC" firstStartedPulling="2026-02-02 15:24:10.061188275 +0000 UTC m=+3051.705825045" lastFinishedPulling="2026-02-02 15:24:11.084952508 +0000 UTC m=+3052.729589278" observedRunningTime="2026-02-02 15:24:12.310029938 +0000 UTC m=+3053.954666708" watchObservedRunningTime="2026-02-02 15:24:12.331180956 +0000 UTC m=+3053.975817716"
Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2"
Feb 02 15:24:12
crc kubenswrapper[4869]: I0202 15:24:12.340260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340349 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340405 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.340446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.355762 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6bc7747c5b-j78w2"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.357354 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.370578 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bc7747c5b-j78w2"] Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.442008 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsdh\" (UniqueName: \"kubernetes.io/projected/8714c728-0089-451b-8335-ab32ef8c39ac-kube-api-access-pcsdh\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.442363 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.444283 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.444671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-scripts\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.444849 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-combined-ca-bundle\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.444978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.445081 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-config-data\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446155 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-secret-key\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446286 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8714c728-0089-451b-8335-ab32ef8c39ac-logs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446788 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-tls-certs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.446883 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.447069 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.448264 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.454604 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.455739 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.469538 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " 
pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.469643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.469806 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.479679 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") pod \"horizon-74748d768-vjhn2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548700 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsdh\" (UniqueName: \"kubernetes.io/projected/8714c728-0089-451b-8335-ab32ef8c39ac-kube-api-access-pcsdh\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548763 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-scripts\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-combined-ca-bundle\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548817 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-config-data\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-secret-key\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8714c728-0089-451b-8335-ab32ef8c39ac-logs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.548919 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-tls-certs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.551157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-scripts\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.551667 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8714c728-0089-451b-8335-ab32ef8c39ac-config-data\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.551666 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8714c728-0089-451b-8335-ab32ef8c39ac-logs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.557263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-secret-key\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.557621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.594491 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-horizon-tls-certs\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.613568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsdh\" (UniqueName: \"kubernetes.io/projected/8714c728-0089-451b-8335-ab32ef8c39ac-kube-api-access-pcsdh\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.632234 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8714c728-0089-451b-8335-ab32ef8c39ac-combined-ca-bundle\") pod \"horizon-6bc7747c5b-j78w2\" (UID: \"8714c728-0089-451b-8335-ab32ef8c39ac\") " pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.688989 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.692218 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.729744 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d921-account-create-update-shfv2" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.753983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") pod \"5b666475-dc9a-41e9-b087-b2042c2dd80f\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.754349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") pod \"5b666475-dc9a-41e9-b087-b2042c2dd80f\" (UID: \"5b666475-dc9a-41e9-b087-b2042c2dd80f\") " Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.755031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b666475-dc9a-41e9-b087-b2042c2dd80f" (UID: "5b666475-dc9a-41e9-b087-b2042c2dd80f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.759450 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28" (OuterVolumeSpecName: "kube-api-access-48b28") pod "5b666475-dc9a-41e9-b087-b2042c2dd80f" (UID: "5b666475-dc9a-41e9-b087-b2042c2dd80f"). InnerVolumeSpecName "kube-api-access-48b28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.856205 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") pod \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.856379 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") pod \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\" (UID: \"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607\") " Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.856799 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48b28\" (UniqueName: \"kubernetes.io/projected/5b666475-dc9a-41e9-b087-b2042c2dd80f-kube-api-access-48b28\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.856820 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b666475-dc9a-41e9-b087-b2042c2dd80f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.857324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" (UID: "8d70d6af-0f1a-40d1-b0aa-8896b8fcd607"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.864273 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266" (OuterVolumeSpecName: "kube-api-access-67266") pod "8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" (UID: "8d70d6af-0f1a-40d1-b0aa-8896b8fcd607"). InnerVolumeSpecName "kube-api-access-67266". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.959238 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:12 crc kubenswrapper[4869]: I0202 15:24:12.959269 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67266\" (UniqueName: \"kubernetes.io/projected/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607-kube-api-access-67266\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.249944 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bc7747c5b-j78w2"] Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.262560 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.301154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2vhkx" event={"ID":"5b666475-dc9a-41e9-b087-b2042c2dd80f","Type":"ContainerDied","Data":"0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.301196 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce6ef32ac6a219a34149b463874172bef6e528b983cc5bd5684522a4483d43a" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.301164 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-2vhkx" Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.317159 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerStarted","Data":"bb317fe37d1fca98ae0b5bc915c94ff30a5b109bb554ebf2814b1106d864e8a6"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.318762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bc7747c5b-j78w2" event={"ID":"8714c728-0089-451b-8335-ab32ef8c39ac","Type":"ContainerStarted","Data":"88b020470bc6e0c38a73e136a5b1e9a2c001f26244bfbec5264f95f6e6f2b31f"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.322190 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerStarted","Data":"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.335260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerStarted","Data":"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.335448 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-log" containerID="cri-o://69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" gracePeriod=30 Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.335499 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-httpd" containerID="cri-o://71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" gracePeriod=30 Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.349354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ffb18e2a-67e6-4932-97fb-dd57b66f6c93","Type":"ContainerStarted","Data":"2af17ad0c7dda96215a13938bcace47860a44d057efe2c08c33d929939e077f9"} Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.352050 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-d921-account-create-update-shfv2"
Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.352058 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d921-account-create-update-shfv2" event={"ID":"8d70d6af-0f1a-40d1-b0aa-8896b8fcd607","Type":"ContainerDied","Data":"0488d82d62aae3b848d73ce68527757f78ac4e24690c4bfdbb4078b5c06546b4"}
Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.352121 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0488d82d62aae3b848d73ce68527757f78ac4e24690c4bfdbb4078b5c06546b4"
Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.373416 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.373396179 podStartE2EDuration="4.373396179s" podCreationTimestamp="2026-02-02 15:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:13.361326563 +0000 UTC m=+3055.005963343" watchObservedRunningTime="2026-02-02 15:24:13.373396179 +0000 UTC m=+3055.018032949"
Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.466357 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:24:13 crc kubenswrapper[4869]: E0202 15:24:13.467065 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:24:13 crc kubenswrapper[4869]: I0202 15:24:13.956861 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
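The "Observed pod startup duration" records (pod_startup_latency_tracker.go:104) report two figures: podStartE2EDuration is wall time from pod creation to the pod being observed running, while podStartSLOduration excludes the image-pull window. That is why cinder-volume-volume1-0 earlier shows 3.307s against a 4.331s end-to-end time (its pull ran from 15:24:10.061 to 15:24:11.085), and why glance-default-external-api-0 above, whose pull timestamps are the zero time (no pull was needed), reports identical values; the long decimal (3.3074167230000002) is just floating-point formatting. A quick check of that arithmetic, with values copied from the cinder record:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the cinder-volume-volume1-0 record above; the
	// timestamp layout matches what Go's time.Time.String() produces,
	// which is the form these fields use.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2026-02-02 15:24:10.061188275 +0000 UTC")
	last, _ := time.Parse(layout, "2026-02-02 15:24:11.084952508 +0000 UTC")
	e2e := 4331180956 * time.Nanosecond // podStartE2EDuration="4.331180956s"

	pull := last.Sub(first)
	fmt.Println(pull)       // 1.023764233s image-pull window
	fmt.Println(e2e - pull) // 3.307416723s, the reported podStartSLOduration
}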
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092010 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092076 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092258 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092343 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092360 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.092389 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") pod \"94981156-d105-463b-90e1-db9b2dbbb853\" (UID: \"94981156-d105-463b-90e1-db9b2dbbb853\") "
Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.094203 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "httpd-run".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.094218 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs" (OuterVolumeSpecName: "logs") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.098647 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.099264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8" (OuterVolumeSpecName: "kube-api-access-2kdq8") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "kube-api-access-2kdq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.099378 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts" (OuterVolumeSpecName: "scripts") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.101960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph" (OuterVolumeSpecName: "ceph") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.127329 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.178178 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data" (OuterVolumeSpecName: "config-data") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206805 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206836 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206850 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206857 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206867 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206874 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206882 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94981156-d105-463b-90e1-db9b2dbbb853-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.206891 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kdq8\" (UniqueName: \"kubernetes.io/projected/94981156-d105-463b-90e1-db9b2dbbb853-kube-api-access-2kdq8\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.229707 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94981156-d105-463b-90e1-db9b2dbbb853" (UID: "94981156-d105-463b-90e1-db9b2dbbb853"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.233678 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.251346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.308673 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.308709 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94981156-d105-463b-90e1-db9b2dbbb853-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.371304 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-log" containerID="cri-o://420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" gracePeriod=30 Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.371604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerStarted","Data":"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.371884 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-httpd" containerID="cri-o://44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" gracePeriod=30 Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374757 4869 generic.go:334] "Generic (PLEG): container finished" podID="94981156-d105-463b-90e1-db9b2dbbb853" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" exitCode=0 Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374788 4869 generic.go:334] "Generic (PLEG): container finished" podID="94981156-d105-463b-90e1-db9b2dbbb853" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" exitCode=143 Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374861 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerDied","Data":"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerDied","Data":"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374900 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94981156-d105-463b-90e1-db9b2dbbb853","Type":"ContainerDied","Data":"57a6e95fab1f39c6e095166cd5ba2a8ab99f4835d1e7eb1dd672b0694a98f5f9"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.374931 4869 scope.go:117] 
"RemoveContainer" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.375068 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.399392 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ffb18e2a-67e6-4932-97fb-dd57b66f6c93","Type":"ContainerStarted","Data":"971ea371362a10335e31b3b88f5517683d06a7c5420335425391975c903d9b60"} Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.402733 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.402707287 podStartE2EDuration="5.402707287s" podCreationTimestamp="2026-02-02 15:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:14.400128294 +0000 UTC m=+3056.044765054" watchObservedRunningTime="2026-02-02 15:24:14.402707287 +0000 UTC m=+3056.047344057" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.430071 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.8020294329999995 podStartE2EDuration="6.430051067s" podCreationTimestamp="2026-02-02 15:24:08 +0000 UTC" firstStartedPulling="2026-02-02 15:24:11.030384392 +0000 UTC m=+3052.675021162" lastFinishedPulling="2026-02-02 15:24:12.658406026 +0000 UTC m=+3054.303042796" observedRunningTime="2026-02-02 15:24:14.424502451 +0000 UTC m=+3056.069139251" watchObservedRunningTime="2026-02-02 15:24:14.430051067 +0000 UTC m=+3056.074687837" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.452844 4869 scope.go:117] "RemoveContainer" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.465381 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.482095 4869 scope.go:117] "RemoveContainer" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.484127 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": container with ID starting with 71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc not found: ID does not exist" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.484176 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc"} err="failed to get container status \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": rpc error: code = NotFound desc = could not find container \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": container with ID starting with 71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc not found: ID does not exist" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.484207 4869 scope.go:117] "RemoveContainer" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" Feb 02 
15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.485006 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": container with ID starting with 69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4 not found: ID does not exist" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485033 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4"} err="failed to get container status \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": rpc error: code = NotFound desc = could not find container \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": container with ID starting with 69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4 not found: ID does not exist" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485050 4869 scope.go:117] "RemoveContainer" containerID="71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485184 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485726 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc"} err="failed to get container status \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": rpc error: code = NotFound desc = could not find container \"71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc\": container with ID starting with 71e5d11c801997f923fb663e82c7aea80988a8ad0836c553ac39e4a0a314c7cc not found: ID does not exist" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485750 4869 scope.go:117] "RemoveContainer" containerID="69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.485999 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4"} err="failed to get container status \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": rpc error: code = NotFound desc = could not find container \"69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4\": container with ID starting with 69a9fe55699ca12ad424e01214ea12d3cfcb2fff3a17551cdabf08dfa8c894e4 not found: ID does not exist" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.513899 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.514651 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-log" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.514677 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-log" Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.514700 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b666475-dc9a-41e9-b087-b2042c2dd80f" containerName="mariadb-database-create" Feb 02 15:24:14 crc 
kubenswrapper[4869]: I0202 15:24:14.514709 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b666475-dc9a-41e9-b087-b2042c2dd80f" containerName="mariadb-database-create" Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.514734 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" containerName="mariadb-account-create-update" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.514745 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" containerName="mariadb-account-create-update" Feb 02 15:24:14 crc kubenswrapper[4869]: E0202 15:24:14.514767 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-httpd" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.514774 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-httpd" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.515094 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b666475-dc9a-41e9-b087-b2042c2dd80f" containerName="mariadb-database-create" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.515126 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-httpd" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.515147 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="94981156-d105-463b-90e1-db9b2dbbb853" containerName="glance-log" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.515160 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" containerName="mariadb-account-create-update" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.516831 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.520218 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.520718 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.522797 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623534 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623607 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623657 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-logs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623690 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-ceph\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623927 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfphl\" 
(UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-kube-api-access-cfphl\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.623971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727236 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-ceph\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfphl\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-kube-api-access-cfphl\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727356 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-logs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727502 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727552 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") 
pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.727573 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.730422 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.734123 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.734394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6439a406-db54-421d-b5c7-5911b35cfda3-logs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.734478 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-scripts\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.735006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-ceph\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.736523 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.740300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.748798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6439a406-db54-421d-b5c7-5911b35cfda3-config-data\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 
15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.754938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfphl\" (UniqueName: \"kubernetes.io/projected/6439a406-db54-421d-b5c7-5911b35cfda3-kube-api-access-cfphl\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.771886 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6439a406-db54-421d-b5c7-5911b35cfda3\") " pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.874440 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.992321 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-jf2x2"] Feb 02 15:24:14 crc kubenswrapper[4869]: I0202 15:24:14.999416 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.010173 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jf2x2"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.011135 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-gtk54" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.011891 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.046715 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.046852 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.046947 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.046981 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.151120 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.151199 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.151222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.151300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.158617 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.164960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.176044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.188571 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") pod \"manila-db-sync-jf2x2\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.263098 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.346095 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353610 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353838 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.353933 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354066 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354155 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354228 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\" (UID: \"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22\") " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.354714 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.355365 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs" (OuterVolumeSpecName: "logs") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.359558 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts" (OuterVolumeSpecName: "scripts") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.359821 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.365154 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph" (OuterVolumeSpecName: "ceph") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.379515 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb" (OuterVolumeSpecName: "kube-api-access-9rjnb") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "kube-api-access-9rjnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.394450 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.434335 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442682 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" exitCode=0 Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442708 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" exitCode=143 Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442768 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerDied","Data":"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a"} Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerDied","Data":"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61"} Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f2cc1fd-d5ce-45c5-a396-88cf344d5f22","Type":"ContainerDied","Data":"1afcccd94d0ae4b407fdf8e32cfa845c1df5d114a1c85b8851a8082600f3c817"} Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.442845 4869 scope.go:117] "RemoveContainer" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.443073 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456173 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456196 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456218 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456229 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456238 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456247 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456257 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.456266 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rjnb\" (UniqueName: \"kubernetes.io/projected/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-kube-api-access-9rjnb\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.486291 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94981156-d105-463b-90e1-db9b2dbbb853" path="/var/lib/kubelet/pods/94981156-d105-463b-90e1-db9b2dbbb853/volumes" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.494435 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.504719 4869 scope.go:117] "RemoveContainer" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.520073 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data" (OuterVolumeSpecName: "config-data") pod "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" (UID: "8f2cc1fd-d5ce-45c5-a396-88cf344d5f22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.542763 4869 scope.go:117] "RemoveContainer" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" Feb 02 15:24:15 crc kubenswrapper[4869]: E0202 15:24:15.543179 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": container with ID starting with 44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a not found: ID does not exist" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543212 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a"} err="failed to get container status \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": rpc error: code = NotFound desc = could not find container \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": container with ID starting with 44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a not found: ID does not exist" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543241 4869 scope.go:117] "RemoveContainer" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" Feb 02 15:24:15 crc kubenswrapper[4869]: E0202 15:24:15.543448 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": container with ID starting with 420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61 not found: ID does not exist" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543470 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61"} err="failed to get container status \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": rpc error: code = NotFound desc = could not find container \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": container with ID starting with 420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61 not found: ID does not exist" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543487 4869 scope.go:117] "RemoveContainer" containerID="44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543666 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a"} err="failed to get container status \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": rpc error: code = NotFound desc = could not find container \"44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a\": container with ID starting with 44829d1545ab15bd5fcbad9089129d81d3c0212827fec6abde2f79914113089a not found: ID does not exist" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.543696 4869 scope.go:117] "RemoveContainer" containerID="420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.544018 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61"} err="failed to get container status \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": rpc error: code = NotFound desc = could not find container \"420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61\": container with ID starting with 420bafc320f311435d1910cdfa63b3031c890dcf30fd59ba66c4f1fe3d5d5a61 not found: ID does not exist" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.549086 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.558470 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.558502 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.788419 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.796735 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.811369 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: E0202 15:24:15.815569 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-log" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.815594 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-log" Feb 02 15:24:15 crc kubenswrapper[4869]: E0202 15:24:15.815608 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-httpd" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.815616 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-httpd" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.816138 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-log" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.816152 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" containerName="glance-httpd" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.817522 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.823811 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.826499 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.827931 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865386 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865413 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxvd\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-kube-api-access-tvxvd\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865589 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.865628 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.922524 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jf2x2"] Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.966963 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967021 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxvd\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-kube-api-access-tvxvd\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967076 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967142 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967232 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: 
I0202 15:24:15.967253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.967290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.970186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.970413 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.970476 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4f5a226-bdff-4182-971c-e3a22264a7d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.972609 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.974487 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.976567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.978121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.988662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e4f5a226-bdff-4182-971c-e3a22264a7d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:15 crc kubenswrapper[4869]: I0202 15:24:15.992429 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxvd\" (UniqueName: \"kubernetes.io/projected/e4f5a226-bdff-4182-971c-e3a22264a7d6-kube-api-access-tvxvd\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.004484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4f5a226-bdff-4182-971c-e3a22264a7d6\") " pod="openstack/glance-default-internal-api-0" Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.136382 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.482048 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jf2x2" event={"ID":"d8b453d3-88d6-4fd5-bedc-62e0d4270f20","Type":"ContainerStarted","Data":"b1c4627ca0ca190d9e5b9123d862a6e8bc80353fedf05e6831015a4a4f791ce4"} Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.492054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6439a406-db54-421d-b5c7-5911b35cfda3","Type":"ContainerStarted","Data":"6234895e93703654cdba09b154044e2a9aadcb94c9519fae5cdcd0e6aae32ce1"} Feb 02 15:24:16 crc kubenswrapper[4869]: I0202 15:24:16.800040 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 15:24:17 crc kubenswrapper[4869]: I0202 15:24:17.476478 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2cc1fd-d5ce-45c5-a396-88cf344d5f22" path="/var/lib/kubelet/pods/8f2cc1fd-d5ce-45c5-a396-88cf344d5f22/volumes" Feb 02 15:24:17 crc kubenswrapper[4869]: I0202 15:24:17.514664 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6439a406-db54-421d-b5c7-5911b35cfda3","Type":"ContainerStarted","Data":"aa24e45981cea8b1278e50a7fe709e50641dc1b8313f907be1b6ff84c40bfe67"} Feb 02 15:24:17 crc kubenswrapper[4869]: I0202 15:24:17.514717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6439a406-db54-421d-b5c7-5911b35cfda3","Type":"ContainerStarted","Data":"e9b7c37f5dd6e0ffba322c14393a839bd9de8e92d96f01031a371abfff466c3f"} Feb 02 15:24:17 crc kubenswrapper[4869]: I0202 15:24:17.554616 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.554600956 podStartE2EDuration="3.554600956s" podCreationTimestamp="2026-02-02 15:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:17.534671628 +0000 UTC m=+3059.179308398" watchObservedRunningTime="2026-02-02 15:24:17.554600956 +0000 UTC m=+3059.199237726" Feb 02 15:24:19 crc kubenswrapper[4869]: I0202 15:24:19.342979 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-backup-0" Feb 02 15:24:19 crc kubenswrapper[4869]: I0202 15:24:19.430590 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 02 15:24:19 crc kubenswrapper[4869]: I0202 15:24:19.602699 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 02 15:24:22 crc kubenswrapper[4869]: W0202 15:24:22.055647 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4f5a226_bdff_4182_971c_e3a22264a7d6.slice/crio-725e7a1e5863de2cde112911f3c462b95e2f2b823c99f92501cd170329913238 WatchSource:0}: Error finding container 725e7a1e5863de2cde112911f3c462b95e2f2b823c99f92501cd170329913238: Status 404 returned error can't find the container with id 725e7a1e5863de2cde112911f3c462b95e2f2b823c99f92501cd170329913238 Feb 02 15:24:22 crc kubenswrapper[4869]: I0202 15:24:22.562100 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4f5a226-bdff-4182-971c-e3a22264a7d6","Type":"ContainerStarted","Data":"725e7a1e5863de2cde112911f3c462b95e2f2b823c99f92501cd170329913238"} Feb 02 15:24:24 crc kubenswrapper[4869]: I0202 15:24:24.875779 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 15:24:24 crc kubenswrapper[4869]: I0202 15:24:24.876395 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 15:24:24 crc kubenswrapper[4869]: I0202 15:24:24.979735 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 15:24:24 crc kubenswrapper[4869]: I0202 15:24:24.987381 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.593345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4f5a226-bdff-4182-971c-e3a22264a7d6","Type":"ContainerStarted","Data":"f61d9fbf5f53654cab8f027de80001582ffe118f15af983135ef49928bf0260e"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.596110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerStarted","Data":"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.596155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerStarted","Data":"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.600872 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bc7747c5b-j78w2" event={"ID":"8714c728-0089-451b-8335-ab32ef8c39ac","Type":"ContainerStarted","Data":"dbdb3ed5bc4906a409e00c9fb4f60c43ae1a1ef35da26139ad274f01a262a6a3"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.600926 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bc7747c5b-j78w2" event={"ID":"8714c728-0089-451b-8335-ab32ef8c39ac","Type":"ContainerStarted","Data":"56310bc7a94d5d1ce987814af1e280656dcc3680b558e4e3eb45fea86ee388fe"} Feb 02 15:24:25 crc 
kubenswrapper[4869]: I0202 15:24:25.603952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerStarted","Data":"8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.603992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerStarted","Data":"e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.604108 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74c696d745-m9v9m" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon-log" containerID="cri-o://e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db" gracePeriod=30 Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.604729 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74c696d745-m9v9m" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon" containerID="cri-o://8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba" gracePeriod=30 Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.613178 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jf2x2" event={"ID":"d8b453d3-88d6-4fd5-bedc-62e0d4270f20","Type":"ContainerStarted","Data":"5948d840f279d95c368e5ad5e8fcf13a024cb24a66d211ff6dee2d8bb1e46f72"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.624633 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74748d768-vjhn2" podStartSLOduration=2.290035868 podStartE2EDuration="13.624613383s" podCreationTimestamp="2026-02-02 15:24:12 +0000 UTC" firstStartedPulling="2026-02-02 15:24:13.286773878 +0000 UTC m=+3054.931410648" lastFinishedPulling="2026-02-02 15:24:24.621351333 +0000 UTC m=+3066.265988163" observedRunningTime="2026-02-02 15:24:25.619800506 +0000 UTC m=+3067.264437316" watchObservedRunningTime="2026-02-02 15:24:25.624613383 +0000 UTC m=+3067.269250173" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626143 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d66c5779c-pggjz" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon-log" containerID="cri-o://ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e" gracePeriod=30 Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626343 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d66c5779c-pggjz" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon" containerID="cri-o://790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7" gracePeriod=30 Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626869 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerStarted","Data":"790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626901 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" 
event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerStarted","Data":"ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e"} Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.626993 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.627010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.645735 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-jf2x2" podStartSLOduration=2.9657750419999998 podStartE2EDuration="11.64571559s" podCreationTimestamp="2026-02-02 15:24:14 +0000 UTC" firstStartedPulling="2026-02-02 15:24:15.941340803 +0000 UTC m=+3057.585977573" lastFinishedPulling="2026-02-02 15:24:24.621281351 +0000 UTC m=+3066.265918121" observedRunningTime="2026-02-02 15:24:25.642105061 +0000 UTC m=+3067.286741851" watchObservedRunningTime="2026-02-02 15:24:25.64571559 +0000 UTC m=+3067.290352360" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.665006 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6bc7747c5b-j78w2" podStartSLOduration=2.329970607 podStartE2EDuration="13.664971151s" podCreationTimestamp="2026-02-02 15:24:12 +0000 UTC" firstStartedPulling="2026-02-02 15:24:13.286209875 +0000 UTC m=+3054.930846645" lastFinishedPulling="2026-02-02 15:24:24.621210419 +0000 UTC m=+3066.265847189" observedRunningTime="2026-02-02 15:24:25.659704912 +0000 UTC m=+3067.304341682" watchObservedRunningTime="2026-02-02 15:24:25.664971151 +0000 UTC m=+3067.309607941" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.684506 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-74c696d745-m9v9m" podStartSLOduration=2.8311714759999997 podStartE2EDuration="16.684483769s" podCreationTimestamp="2026-02-02 15:24:09 +0000 UTC" firstStartedPulling="2026-02-02 15:24:10.762476303 +0000 UTC m=+3052.407113073" lastFinishedPulling="2026-02-02 15:24:24.615788596 +0000 UTC m=+3066.260425366" observedRunningTime="2026-02-02 15:24:25.680126602 +0000 UTC m=+3067.324763382" watchObservedRunningTime="2026-02-02 15:24:25.684483769 +0000 UTC m=+3067.329120539" Feb 02 15:24:25 crc kubenswrapper[4869]: I0202 15:24:25.710295 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d66c5779c-pggjz" podStartSLOduration=2.865635129 podStartE2EDuration="16.71027171s" podCreationTimestamp="2026-02-02 15:24:09 +0000 UTC" firstStartedPulling="2026-02-02 15:24:10.776989019 +0000 UTC m=+3052.421625789" lastFinishedPulling="2026-02-02 15:24:24.6216256 +0000 UTC m=+3066.266262370" observedRunningTime="2026-02-02 15:24:25.706365004 +0000 UTC m=+3067.351001774" watchObservedRunningTime="2026-02-02 15:24:25.71027171 +0000 UTC m=+3067.354908480" Feb 02 15:24:26 crc kubenswrapper[4869]: I0202 15:24:26.462270 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:24:26 crc kubenswrapper[4869]: E0202 15:24:26.462799 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:24:26 crc kubenswrapper[4869]: I0202 15:24:26.635421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4f5a226-bdff-4182-971c-e3a22264a7d6","Type":"ContainerStarted","Data":"632a991072605ccdb319651bb13ce3e2e907da3751ea2ca2a84d008da38a6a16"} Feb 02 15:24:26 crc kubenswrapper[4869]: I0202 15:24:26.659188 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.659169689 podStartE2EDuration="11.659169689s" podCreationTimestamp="2026-02-02 15:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:26.65511614 +0000 UTC m=+3068.299752910" watchObservedRunningTime="2026-02-02 15:24:26.659169689 +0000 UTC m=+3068.303806449" Feb 02 15:24:27 crc kubenswrapper[4869]: I0202 15:24:27.643216 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 15:24:27 crc kubenswrapper[4869]: I0202 15:24:27.643523 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 15:24:29 crc kubenswrapper[4869]: I0202 15:24:29.883683 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:29 crc kubenswrapper[4869]: I0202 15:24:29.935030 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 15:24:29 crc kubenswrapper[4869]: I0202 15:24:29.935152 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 15:24:29 crc kubenswrapper[4869]: I0202 15:24:29.937555 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 15:24:30 crc kubenswrapper[4869]: I0202 15:24:30.017372 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:32 crc kubenswrapper[4869]: I0202 15:24:32.558527 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:32 crc kubenswrapper[4869]: I0202 15:24:32.558879 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:32 crc kubenswrapper[4869]: I0202 15:24:32.690572 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:32 crc kubenswrapper[4869]: I0202 15:24:32.691514 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.137461 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.137862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.184627 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 
15:24:36.202915 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.736477 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:36 crc kubenswrapper[4869]: I0202 15:24:36.736518 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:37 crc kubenswrapper[4869]: I0202 15:24:37.462693 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:24:37 crc kubenswrapper[4869]: E0202 15:24:37.463264 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:24:38 crc kubenswrapper[4869]: I0202 15:24:38.700585 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:38 crc kubenswrapper[4869]: I0202 15:24:38.734143 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 15:24:40 crc kubenswrapper[4869]: I0202 15:24:40.803090 4869 generic.go:334] "Generic (PLEG): container finished" podID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" containerID="5948d840f279d95c368e5ad5e8fcf13a024cb24a66d211ff6dee2d8bb1e46f72" exitCode=0 Feb 02 15:24:40 crc kubenswrapper[4869]: I0202 15:24:40.803325 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jf2x2" event={"ID":"d8b453d3-88d6-4fd5-bedc-62e0d4270f20","Type":"ContainerDied","Data":"5948d840f279d95c368e5ad5e8fcf13a024cb24a66d211ff6dee2d8bb1e46f72"} Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.235188 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.268779 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") pod \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.268837 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") pod \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.268942 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") pod \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.269081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") pod \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\" (UID: \"d8b453d3-88d6-4fd5-bedc-62e0d4270f20\") " Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.274982 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c" (OuterVolumeSpecName: "kube-api-access-j297c") pod "d8b453d3-88d6-4fd5-bedc-62e0d4270f20" (UID: "d8b453d3-88d6-4fd5-bedc-62e0d4270f20"). InnerVolumeSpecName "kube-api-access-j297c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.279279 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "d8b453d3-88d6-4fd5-bedc-62e0d4270f20" (UID: "d8b453d3-88d6-4fd5-bedc-62e0d4270f20"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.287125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data" (OuterVolumeSpecName: "config-data") pod "d8b453d3-88d6-4fd5-bedc-62e0d4270f20" (UID: "d8b453d3-88d6-4fd5-bedc-62e0d4270f20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.297275 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8b453d3-88d6-4fd5-bedc-62e0d4270f20" (UID: "d8b453d3-88d6-4fd5-bedc-62e0d4270f20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.371195 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j297c\" (UniqueName: \"kubernetes.io/projected/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-kube-api-access-j297c\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.371228 4869 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.371237 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.371248 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b453d3-88d6-4fd5-bedc-62e0d4270f20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.559770 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.692476 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6bc7747c5b-j78w2" podUID="8714c728-0089-451b-8335-ab32ef8c39ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.248:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.248:8443: connect: connection refused" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.820863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jf2x2" event={"ID":"d8b453d3-88d6-4fd5-bedc-62e0d4270f20","Type":"ContainerDied","Data":"b1c4627ca0ca190d9e5b9123d862a6e8bc80353fedf05e6831015a4a4f791ce4"} Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.820902 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1c4627ca0ca190d9e5b9123d862a6e8bc80353fedf05e6831015a4a4f791ce4" Feb 02 15:24:42 crc kubenswrapper[4869]: I0202 15:24:42.820971 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jf2x2" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.167889 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: E0202 15:24:43.168411 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" containerName="manila-db-sync" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.168435 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" containerName="manila-db-sync" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.168709 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" containerName="manila-db-sync" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.169996 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.173540 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.173699 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.173794 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.173860 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-gtk54" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189742 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189857 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189933 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189953 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.189971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.203189 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.230238 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.231720 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.236668 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290743 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290805 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290860 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290886 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.290919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291201 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") pod \"manila-scheduler-0\" (UID: 
\"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291304 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291321 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291342 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291464 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.291502 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.294865 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.300403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.301238 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.313832 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.315220 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.329064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.329309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") pod \"manila-scheduler-0\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.387971 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5kt5g"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.410901 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.442929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5kt5g"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449628 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449820 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449861 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654fm\" (UniqueName: \"kubernetes.io/projected/2d493264-07c6-4809-9a3e-809e60997896-kube-api-access-654fm\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: 
\"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449933 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.449962 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-config\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450013 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450053 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450162 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.450232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.451506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " 
pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.451549 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.467872 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.471665 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.474533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.479537 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.480200 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.502424 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.504533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") pod \"manila-share-share1-0\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.538189 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.540190 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.546061 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654fm\" (UniqueName: \"kubernetes.io/projected/2d493264-07c6-4809-9a3e-809e60997896-kube-api-access-654fm\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-config\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552737 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552795 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.552826 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.553267 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.553949 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-config\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.559450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.563547 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.563592 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.563900 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.564415 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d493264-07c6-4809-9a3e-809e60997896-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.575654 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654fm\" (UniqueName: \"kubernetes.io/projected/2d493264-07c6-4809-9a3e-809e60997896-kube-api-access-654fm\") pod \"dnsmasq-dns-69655fd4bf-5kt5g\" (UID: \"2d493264-07c6-4809-9a3e-809e60997896\") " pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.605338 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656530 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656600 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656704 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656801 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656896 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") pod 
\"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656954 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.656998 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.758688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.758969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759031 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759049 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759159 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759194 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.759707 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.760697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.782466 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.796687 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.796730 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.796694 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:43 crc kubenswrapper[4869]: I0202 15:24:43.797247 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") pod \"manila-api-0\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " pod="openstack/manila-api-0" Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:43.932238 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.434658 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.502169 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.700685 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5kt5g"] Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.833672 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.869831 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerStarted","Data":"3af6ab75a56f8bed06c1d0bc83b535b2352c23686aa45e49a7bac1b6f3b2b711"} Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.871377 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerStarted","Data":"a5dd2b6085a889dc98e2fb099d3063bc3e713c383fe9013a6e33aac2e5968482"} Feb 02 15:24:45 crc kubenswrapper[4869]: I0202 15:24:45.872577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" event={"ID":"2d493264-07c6-4809-9a3e-809e60997896","Type":"ContainerStarted","Data":"0af54a4c5cfceed254885ffe8b56a8d2ad390290b0f4d7e1cc9abf8392e0cfd6"} Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.411391 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.888304 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerStarted","Data":"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2"} Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.888583 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerStarted","Data":"b76b0402055bbe916acc9c514573c63133b5f78cbe7cb50685001cf6af0e5d07"} Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.892369 4869 generic.go:334] "Generic (PLEG): container finished" podID="2d493264-07c6-4809-9a3e-809e60997896" containerID="daf9bbfc3311debaff2b01a5093e0472118daf0097059296dbbc8754ec88d996" exitCode=0 Feb 02 15:24:46 crc kubenswrapper[4869]: I0202 15:24:46.892405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" event={"ID":"2d493264-07c6-4809-9a3e-809e60997896","Type":"ContainerDied","Data":"daf9bbfc3311debaff2b01a5093e0472118daf0097059296dbbc8754ec88d996"} Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.944151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" event={"ID":"2d493264-07c6-4809-9a3e-809e60997896","Type":"ContainerStarted","Data":"bae86ceaafa3eeec39dce3c0c4ccb28223cd4c297aed6a1d3741a7087742cdc9"} Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.946172 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.968777 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerStarted","Data":"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a"} Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.968822 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.968834 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api" containerID="cri-o://3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" gracePeriod=30 Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.968847 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api-log" containerID="cri-o://276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" gracePeriod=30 Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.974715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerStarted","Data":"4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9"} Feb 02 15:24:47 crc kubenswrapper[4869]: I0202 15:24:47.974748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerStarted","Data":"dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41"} Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.001268 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" podStartSLOduration=5.001249228 podStartE2EDuration="5.001249228s" podCreationTimestamp="2026-02-02 15:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:47.981374342 +0000 UTC m=+3089.626011112" watchObservedRunningTime="2026-02-02 15:24:48.001249228 +0000 UTC m=+3089.645885998" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.065609 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.293904732 podStartE2EDuration="5.065579503s" podCreationTimestamp="2026-02-02 15:24:43 +0000 UTC" firstStartedPulling="2026-02-02 15:24:45.426757914 +0000 UTC m=+3087.071394684" lastFinishedPulling="2026-02-02 15:24:46.198432685 +0000 UTC m=+3087.843069455" observedRunningTime="2026-02-02 15:24:48.046272 +0000 UTC m=+3089.690908770" watchObservedRunningTime="2026-02-02 15:24:48.065579503 +0000 UTC m=+3089.710216283" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.088429 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.088411782 podStartE2EDuration="5.088411782s" podCreationTimestamp="2026-02-02 15:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:48.086460835 +0000 UTC m=+3089.731097605" watchObservedRunningTime="2026-02-02 15:24:48.088411782 +0000 UTC m=+3089.733048552" Feb 02 15:24:48 crc kubenswrapper[4869]: E0202 15:24:48.254098 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67738938_12ff_40e9_8c30_d0993939eafb.slice/crio-276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67738938_12ff_40e9_8c30_d0993939eafb.slice/crio-conmon-276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2.scope\": RecentStats: unable to find data in memory cache]" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.770092 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907213 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907373 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907443 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907554 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") pod \"67738938-12ff-40e9-8c30-d0993939eafb\" (UID: \"67738938-12ff-40e9-8c30-d0993939eafb\") " Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.907603 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.908008 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67738938-12ff-40e9-8c30-d0993939eafb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.909727 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs" (OuterVolumeSpecName: "logs") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.916211 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts" (OuterVolumeSpecName: "scripts") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.921075 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb" (OuterVolumeSpecName: "kube-api-access-jxxdb") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "kube-api-access-jxxdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.930308 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.972886 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data" (OuterVolumeSpecName: "config-data") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990789 4869 generic.go:334] "Generic (PLEG): container finished" podID="67738938-12ff-40e9-8c30-d0993939eafb" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" exitCode=0 Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990822 4869 generic.go:334] "Generic (PLEG): container finished" podID="67738938-12ff-40e9-8c30-d0993939eafb" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" exitCode=143 Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990845 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerDied","Data":"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a"} Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990887 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990950 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerDied","Data":"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2"} Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.990990 4869 scope.go:117] "RemoveContainer" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.991087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"67738938-12ff-40e9-8c30-d0993939eafb","Type":"ContainerDied","Data":"b76b0402055bbe916acc9c514573c63133b5f78cbe7cb50685001cf6af0e5d07"} Feb 02 15:24:48 crc kubenswrapper[4869]: I0202 15:24:48.995089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67738938-12ff-40e9-8c30-d0993939eafb" (UID: "67738938-12ff-40e9-8c30-d0993939eafb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009884 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67738938-12ff-40e9-8c30-d0993939eafb-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009935 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009947 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxxdb\" (UniqueName: \"kubernetes.io/projected/67738938-12ff-40e9-8c30-d0993939eafb-kube-api-access-jxxdb\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009959 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009968 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.009976 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67738938-12ff-40e9-8c30-d0993939eafb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.099743 4869 scope.go:117] "RemoveContainer" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.126603 4869 scope.go:117] "RemoveContainer" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.127163 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": container with ID starting with 
3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a not found: ID does not exist" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.127211 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a"} err="failed to get container status \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": rpc error: code = NotFound desc = could not find container \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": container with ID starting with 3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a not found: ID does not exist" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.127238 4869 scope.go:117] "RemoveContainer" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.130273 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": container with ID starting with 276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2 not found: ID does not exist" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.130318 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2"} err="failed to get container status \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": rpc error: code = NotFound desc = could not find container \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": container with ID starting with 276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2 not found: ID does not exist" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.130345 4869 scope.go:117] "RemoveContainer" containerID="3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.130871 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a"} err="failed to get container status \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": rpc error: code = NotFound desc = could not find container \"3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a\": container with ID starting with 3b748be4ce578468bd8ad7463f67549932eccef27b9f3bdc4b9aef811c46cc9a not found: ID does not exist" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.130953 4869 scope.go:117] "RemoveContainer" containerID="276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.132262 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2"} err="failed to get container status \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": rpc error: code = NotFound desc = could not find container \"276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2\": container with ID starting with 276aee05fd774e1a5b6aeaf6780718146f45524ca79e9702a152f87cbb78bdc2 not found: ID does not exist" Feb 02 
15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.340185 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.351290 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.363935 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.364454 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api-log" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.364469 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api-log" Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.364494 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.364502 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.364750 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api-log" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.364774 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="67738938-12ff-40e9-8c30-d0993939eafb" containerName="manila-api" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.366086 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.375131 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.382524 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.382658 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.382728 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.469424 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:24:49 crc kubenswrapper[4869]: E0202 15:24:49.469702 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.510310 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67738938-12ff-40e9-8c30-d0993939eafb" path="/var/lib/kubelet/pods/67738938-12ff-40e9-8c30-d0993939eafb/volumes" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527454 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrnc\" (UniqueName: \"kubernetes.io/projected/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-kube-api-access-kxrnc\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527548 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527672 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data-custom\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527744 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-etc-machine-id\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527861 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-public-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.527957 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-scripts\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.528047 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-logs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.528138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.629660 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 
crc kubenswrapper[4869]: I0202 15:24:49.630734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data-custom\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.630841 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-etc-machine-id\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.630904 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-etc-machine-id\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631074 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-public-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-scripts\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631286 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-logs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631400 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.631596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxrnc\" (UniqueName: \"kubernetes.io/projected/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-kube-api-access-kxrnc\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.634841 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-public-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc 
kubenswrapper[4869]: I0202 15:24:49.634959 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data-custom\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.635195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-logs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.635679 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.636963 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-scripts\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.637641 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-config-data\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.651152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-internal-tls-certs\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.651503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxrnc\" (UniqueName: \"kubernetes.io/projected/68d3a7fe-1a89-4d45-9ffd-8057e313d3e9-kube-api-access-kxrnc\") pod \"manila-api-0\" (UID: \"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9\") " pod="openstack/manila-api-0" Feb 02 15:24:49 crc kubenswrapper[4869]: I0202 15:24:49.701143 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 02 15:24:50 crc kubenswrapper[4869]: I0202 15:24:50.303648 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 02 15:24:50 crc kubenswrapper[4869]: W0202 15:24:50.305520 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68d3a7fe_1a89_4d45_9ffd_8057e313d3e9.slice/crio-834077631b4985d984f999ccd80ba4929d43543f375251b4b743e016ccc2f1a6 WatchSource:0}: Error finding container 834077631b4985d984f999ccd80ba4929d43543f375251b4b743e016ccc2f1a6: Status 404 returned error can't find the container with id 834077631b4985d984f999ccd80ba4929d43543f375251b4b743e016ccc2f1a6 Feb 02 15:24:51 crc kubenswrapper[4869]: I0202 15:24:51.027772 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9","Type":"ContainerStarted","Data":"e8f31c4603a24ff86c886e9397b2233da011fffea9ade621ff2084364663d387"} Feb 02 15:24:51 crc kubenswrapper[4869]: I0202 15:24:51.028145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9","Type":"ContainerStarted","Data":"834077631b4985d984f999ccd80ba4929d43543f375251b4b743e016ccc2f1a6"} Feb 02 15:24:52 crc kubenswrapper[4869]: I0202 15:24:52.050752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"68d3a7fe-1a89-4d45-9ffd-8057e313d3e9","Type":"ContainerStarted","Data":"6da282bcfb7b5348e18133c3cc81a9ecd307f63f23f02853a371f454c1dc053b"} Feb 02 15:24:52 crc kubenswrapper[4869]: I0202 15:24:52.051066 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 02 15:24:52 crc kubenswrapper[4869]: I0202 15:24:52.099531 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.099512555 podStartE2EDuration="3.099512555s" podCreationTimestamp="2026-02-02 15:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:24:52.088033694 +0000 UTC m=+3093.732670464" watchObservedRunningTime="2026-02-02 15:24:52.099512555 +0000 UTC m=+3093.744149325" Feb 02 15:24:53 crc kubenswrapper[4869]: I0202 15:24:53.503823 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 15:24:53 crc kubenswrapper[4869]: I0202 15:24:53.607100 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-5kt5g" Feb 02 15:24:53 crc kubenswrapper[4869]: I0202 15:24:53.699279 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 15:24:53 crc kubenswrapper[4869]: I0202 15:24:53.699981 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="dnsmasq-dns" containerID="cri-o://f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8" gracePeriod=10 Feb 02 15:24:54 crc kubenswrapper[4869]: I0202 15:24:54.077268 4869 generic.go:334] "Generic (PLEG): container finished" podID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerID="f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8" exitCode=0 Feb 02 15:24:54 crc kubenswrapper[4869]: I0202 15:24:54.077320 
4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerDied","Data":"f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8"} Feb 02 15:24:54 crc kubenswrapper[4869]: I0202 15:24:54.947371 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078346 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078396 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078450 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078744 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078771 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.078859 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") pod \"886da892-6808-4ff8-8fa4-48ad9cd65843\" (UID: \"886da892-6808-4ff8-8fa4-48ad9cd65843\") " Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.088104 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj" (OuterVolumeSpecName: "kube-api-access-898pj") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "kube-api-access-898pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.115306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" event={"ID":"886da892-6808-4ff8-8fa4-48ad9cd65843","Type":"ContainerDied","Data":"f5011defbedf57db3a35f576f2d27acfa80a3d8cea8c46fb6b519d638e8c4f12"} Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.115507 4869 scope.go:117] "RemoveContainer" containerID="f2b09b285d84f4c08e8f09c1912b0fe16978549e7312fda228ce84d0b3c9dbe8" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.115631 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zltx5" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.133494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.142023 4869 scope.go:117] "RemoveContainer" containerID="267d2b5ca4d238e5b769ca48e7a762954290c341c2ea35ac8b67c09d6240f345" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.171703 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.182295 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-898pj\" (UniqueName: \"kubernetes.io/projected/886da892-6808-4ff8-8fa4-48ad9cd65843-kube-api-access-898pj\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.182329 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.182341 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.186954 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config" (OuterVolumeSpecName: "config") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.188864 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.200630 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "886da892-6808-4ff8-8fa4-48ad9cd65843" (UID: "886da892-6808-4ff8-8fa4-48ad9cd65843"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.219011 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.253123 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.284134 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.284167 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-config\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.284180 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/886da892-6808-4ff8-8fa4-48ad9cd65843-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.478209 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 15:24:55 crc kubenswrapper[4869]: I0202 15:24:55.478406 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zltx5"] Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.153251 4869 generic.go:334] "Generic (PLEG): container finished" podID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerID="8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba" exitCode=137 Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.153705 4869 generic.go:334] "Generic (PLEG): container finished" podID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerID="e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db" exitCode=137 Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.153745 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerDied","Data":"8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.153770 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerDied","Data":"e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.157158 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerStarted","Data":"cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.157181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerStarted","Data":"bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.160510 4869 generic.go:334] "Generic (PLEG): container finished" podID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerID="790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7" exitCode=137 Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.160526 4869 generic.go:334] "Generic (PLEG): container finished" podID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerID="ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e" exitCode=137 Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.160542 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerDied","Data":"790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.160559 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerDied","Data":"ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e"} Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.251636 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.286646 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.287711 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.272438327 podStartE2EDuration="13.287694232s" podCreationTimestamp="2026-02-02 15:24:43 +0000 UTC" firstStartedPulling="2026-02-02 15:24:45.52995468 +0000 UTC m=+3087.174591440" lastFinishedPulling="2026-02-02 15:24:54.545210575 +0000 UTC m=+3096.189847345" observedRunningTime="2026-02-02 15:24:56.184096896 +0000 UTC m=+3097.828733676" watchObservedRunningTime="2026-02-02 15:24:56.287694232 +0000 UTC m=+3097.932330992" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.322812 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323101 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323379 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323446 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") pod \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\" (UID: \"c9b2c09c-26a4-44f4-8dad-d90ef99b6972\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.323391 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs" (OuterVolumeSpecName: "logs") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.324183 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.330116 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp" (OuterVolumeSpecName: "kube-api-access-9mtmp") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "kube-api-access-9mtmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.361057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.368951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts" (OuterVolumeSpecName: "scripts") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.413446 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data" (OuterVolumeSpecName: "config-data") pod "c9b2c09c-26a4-44f4-8dad-d90ef99b6972" (UID: "c9b2c09c-26a4-44f4-8dad-d90ef99b6972"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433764 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433832 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433857 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.433969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") pod \"f3598164-68b7-40fe-91ce-d4cf2fa64757\" (UID: \"f3598164-68b7-40fe-91ce-d4cf2fa64757\") " Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439177 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs" (OuterVolumeSpecName: "logs") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439614 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439634 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mtmp\" (UniqueName: \"kubernetes.io/projected/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-kube-api-access-9mtmp\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439644 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439652 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c9b2c09c-26a4-44f4-8dad-d90ef99b6972-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.439661 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3598164-68b7-40fe-91ce-d4cf2fa64757-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.460615 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8" (OuterVolumeSpecName: "kube-api-access-6s2c8") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "kube-api-access-6s2c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.497589 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.526175 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts" (OuterVolumeSpecName: "scripts") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.547724 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data" (OuterVolumeSpecName: "config-data") pod "f3598164-68b7-40fe-91ce-d4cf2fa64757" (UID: "f3598164-68b7-40fe-91ce-d4cf2fa64757"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.548386 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s2c8\" (UniqueName: \"kubernetes.io/projected/f3598164-68b7-40fe-91ce-d4cf2fa64757-kube-api-access-6s2c8\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.548406 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.548415 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3598164-68b7-40fe-91ce-d4cf2fa64757-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:56 crc kubenswrapper[4869]: I0202 15:24:56.548423 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3598164-68b7-40fe-91ce-d4cf2fa64757-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.173197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74c696d745-m9v9m" event={"ID":"c9b2c09c-26a4-44f4-8dad-d90ef99b6972","Type":"ContainerDied","Data":"2db55e6d04f2819c1e06bcde8e721cfa825f9601f520cf4e3f6565c2aaa1d4aa"} Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.174652 4869 scope.go:117] "RemoveContainer" containerID="8751214b5139e4ac75f9b5d2d52d8b692c58d67a63992a6d43d5bceb415c5aba" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.173229 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-74c696d745-m9v9m" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.179326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d66c5779c-pggjz" event={"ID":"f3598164-68b7-40fe-91ce-d4cf2fa64757","Type":"ContainerDied","Data":"1e3835ffee852cf7e2e461dbfd0c1bce873454f7dd01eb6e5bb8f0bd42308327"} Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.179368 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d66c5779c-pggjz" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.235060 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.251878 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74c696d745-m9v9m"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.267565 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.279242 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d66c5779c-pggjz"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.384195 4869 scope.go:117] "RemoveContainer" containerID="e6c42d1d0a06ce880033dfe44f2231d6e878da79d357eb393123a8fa0c9822db" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.409747 4869 scope.go:117] "RemoveContainer" containerID="790fee177bba673525c12d16f6edefedd6ca7806822ebda37546c5117d4405d7" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.519611 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" path="/var/lib/kubelet/pods/886da892-6808-4ff8-8fa4-48ad9cd65843/volumes" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.520580 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" path="/var/lib/kubelet/pods/c9b2c09c-26a4-44f4-8dad-d90ef99b6972/volumes" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.521772 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" path="/var/lib/kubelet/pods/f3598164-68b7-40fe-91ce-d4cf2fa64757/volumes" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.631881 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6bc7747c5b-j78w2" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.651975 4869 scope.go:117] "RemoveContainer" containerID="ed41aa78d149b0d7870f3a82d39b354f75e6364558900ff4d2ddfcb5f19dfb8e" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.744768 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.758472 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon-log" containerID="cri-o://9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.759140 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" containerID="cri-o://1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.774893 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.781901 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:24:57 crc 
kubenswrapper[4869]: I0202 15:24:57.782253 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-central-agent" containerID="cri-o://f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.782428 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" containerID="cri-o://75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.782488 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="sg-core" containerID="cri-o://cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4" gracePeriod=30 Feb 02 15:24:57 crc kubenswrapper[4869]: I0202 15:24:57.782537 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-notification-agent" containerID="cri-o://0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc" gracePeriod=30 Feb 02 15:24:58 crc kubenswrapper[4869]: I0202 15:24:58.224170 4869 generic.go:334] "Generic (PLEG): container finished" podID="d49257d3-a8ff-4242-b438-86da53133fb3" containerID="75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e" exitCode=0 Feb 02 15:24:58 crc kubenswrapper[4869]: I0202 15:24:58.225430 4869 generic.go:334] "Generic (PLEG): container finished" podID="d49257d3-a8ff-4242-b438-86da53133fb3" containerID="cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4" exitCode=2 Feb 02 15:24:58 crc kubenswrapper[4869]: I0202 15:24:58.224366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e"} Feb 02 15:24:58 crc kubenswrapper[4869]: I0202 15:24:58.225554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4"} Feb 02 15:24:59 crc kubenswrapper[4869]: I0202 15:24:59.241692 4869 generic.go:334] "Generic (PLEG): container finished" podID="d49257d3-a8ff-4242-b438-86da53133fb3" containerID="f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2" exitCode=0 Feb 02 15:24:59 crc kubenswrapper[4869]: I0202 15:24:59.241773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"} Feb 02 15:25:00 crc kubenswrapper[4869]: I0202 15:25:00.941036 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:37516->10.217.0.247:8443: read: connection reset by peer" Feb 02 15:25:01 crc kubenswrapper[4869]: I0202 15:25:01.152649 4869 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/ceilometer-0" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.188:3000/\": dial tcp 10.217.0.188:3000: connect: connection refused" Feb 02 15:25:02 crc kubenswrapper[4869]: I0202 15:25:02.559209 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Feb 02 15:25:03 crc kubenswrapper[4869]: I0202 15:25:03.276453 4869 generic.go:334] "Generic (PLEG): container finished" podID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerID="1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" exitCode=0 Feb 02 15:25:03 crc kubenswrapper[4869]: I0202 15:25:03.276496 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerDied","Data":"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597"} Feb 02 15:25:03 crc kubenswrapper[4869]: I0202 15:25:03.563884 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.463393 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:04 crc kubenswrapper[4869]: E0202 15:25:04.464266 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.769275 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858104 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858207 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858351 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.858611 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") pod \"d49257d3-a8ff-4242-b438-86da53133fb3\" (UID: \"d49257d3-a8ff-4242-b438-86da53133fb3\") " Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.859289 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.859574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.864087 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts" (OuterVolumeSpecName: "scripts") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.885221 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.892208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669" (OuterVolumeSpecName: "kube-api-access-86669") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "kube-api-access-86669". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.947570 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961494 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961520 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961826 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961965 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86669\" (UniqueName: \"kubernetes.io/projected/d49257d3-a8ff-4242-b438-86da53133fb3-kube-api-access-86669\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961984 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.961993 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49257d3-a8ff-4242-b438-86da53133fb3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.972588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data" (OuterVolumeSpecName: "config-data") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:04 crc kubenswrapper[4869]: I0202 15:25:04.985626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d49257d3-a8ff-4242-b438-86da53133fb3" (UID: "d49257d3-a8ff-4242-b438-86da53133fb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.063991 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.064034 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49257d3-a8ff-4242-b438-86da53133fb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.210191 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.259852 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297090 4869 generic.go:334] "Generic (PLEG): container finished" podID="d49257d3-a8ff-4242-b438-86da53133fb3" containerID="0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc" exitCode=0 Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297142 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"} Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297259 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49257d3-a8ff-4242-b438-86da53133fb3","Type":"ContainerDied","Data":"0796932bd84ec076e7335a7406319502760ed8351d5e889f11c65dc928821a28"} Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.297280 4869 scope.go:117] "RemoveContainer" containerID="75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.298011 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="manila-scheduler" containerID="cri-o://4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9" gracePeriod=30 Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.298114 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="probe" containerID="cri-o://dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41" gracePeriod=30 Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.337828 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.345399 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.352808 4869 scope.go:117] "RemoveContainer" containerID="cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364148 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364526 4869 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364543 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364567 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364573 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364586 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-central-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364592 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-central-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364605 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="sg-core" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364610 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="sg-core" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364622 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-notification-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364628 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-notification-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364638 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="dnsmasq-dns" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364644 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="dnsmasq-dns" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364654 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364661 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364675 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364681 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364692 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364698 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.364710 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="init" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364717 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="init" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364870 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364883 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="886da892-6808-4ff8-8fa4-48ad9cd65843" containerName="dnsmasq-dns" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364898 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="sg-core" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364927 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="proxy-httpd" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364935 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364948 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3598164-68b7-40fe-91ce-d4cf2fa64757" containerName="horizon-log" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364961 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-central-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364976 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" containerName="ceilometer-notification-agent" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.364990 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b2c09c-26a4-44f4-8dad-d90ef99b6972" containerName="horizon" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.366593 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.372544 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.372673 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.372759 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.378993 4869 scope.go:117] "RemoveContainer" containerID="0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.386159 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.436437 4869 scope.go:117] "RemoveContainer" containerID="f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470152 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2c9\" (UniqueName: \"kubernetes.io/projected/58069dba-f825-4ee3-972d-85d122369b28-kube-api-access-wt2c9\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470264 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470328 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-log-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470420 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-scripts\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-run-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.470608 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-config-data\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.476375 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49257d3-a8ff-4242-b438-86da53133fb3" path="/var/lib/kubelet/pods/d49257d3-a8ff-4242-b438-86da53133fb3/volumes" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.509128 4869 scope.go:117] "RemoveContainer" containerID="75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.509601 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e\": container with ID starting with 75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e not found: ID does not exist" containerID="75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.509643 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e"} err="failed to get container status \"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e\": rpc error: code = NotFound desc = could not find container \"75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e\": container with ID starting with 75cd715d5761b578078dd2cfbb21c7c1f1ed7dc2f9b040afad54f06003328e4e not found: ID does not exist" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.509686 4869 scope.go:117] "RemoveContainer" containerID="cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.510168 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4\": container with ID starting with cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4 not found: ID does not exist" containerID="cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510196 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4"} err="failed to get container status \"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4\": rpc error: code = NotFound desc = could not find container \"cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4\": container with ID starting with cca6a28ff2cd55859fb337843e2e2a4e9e2852dfbf0c0ae0414cd6a7230124c4 not found: ID does not exist" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510215 4869 scope.go:117] "RemoveContainer" containerID="0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.510456 4869 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc\": container with ID starting with 0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc not found: ID does not exist" containerID="0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510494 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc"} err="failed to get container status \"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc\": rpc error: code = NotFound desc = could not find container \"0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc\": container with ID starting with 0108d5b3fe1dc370e8ac622e2be298fff35bfacdedbf553db3c4fe5eeee1bbcc not found: ID does not exist" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510507 4869 scope.go:117] "RemoveContainer" containerID="f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2" Feb 02 15:25:05 crc kubenswrapper[4869]: E0202 15:25:05.510697 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2\": container with ID starting with f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2 not found: ID does not exist" containerID="f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.510721 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2"} err="failed to get container status \"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2\": rpc error: code = NotFound desc = could not find container \"f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2\": container with ID starting with f72404fc6e43589e6a07d71bd41467f5c883fa86a37e263f3e7b47764cd36cb2 not found: ID does not exist" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572674 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-scripts\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-run-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-config-data\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2c9\" (UniqueName: \"kubernetes.io/projected/58069dba-f825-4ee3-972d-85d122369b28-kube-api-access-wt2c9\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572847 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572879 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.572948 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-log-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.573852 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-run-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.574459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58069dba-f825-4ee3-972d-85d122369b28-log-httpd\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.578125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.578398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.578483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-scripts\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.579505 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-config-data\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " 
pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.580438 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58069dba-f825-4ee3-972d-85d122369b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.595882 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2c9\" (UniqueName: \"kubernetes.io/projected/58069dba-f825-4ee3-972d-85d122369b28-kube-api-access-wt2c9\") pod \"ceilometer-0\" (UID: \"58069dba-f825-4ee3-972d-85d122369b28\") " pod="openstack/ceilometer-0" Feb 02 15:25:05 crc kubenswrapper[4869]: I0202 15:25:05.731137 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 15:25:06 crc kubenswrapper[4869]: I0202 15:25:06.310140 4869 generic.go:334] "Generic (PLEG): container finished" podID="2097f350-00d8-4077-8864-1e2f78ab718f" containerID="dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41" exitCode=0 Feb 02 15:25:06 crc kubenswrapper[4869]: I0202 15:25:06.310457 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerDied","Data":"dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41"} Feb 02 15:25:06 crc kubenswrapper[4869]: I0202 15:25:06.332179 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 15:25:06 crc kubenswrapper[4869]: W0202 15:25:06.332288 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58069dba_f825_4ee3_972d_85d122369b28.slice/crio-99e8cda2916ba3256f526f9d400e56bf0ae9d1da2c11495bca8664e40405698d WatchSource:0}: Error finding container 99e8cda2916ba3256f526f9d400e56bf0ae9d1da2c11495bca8664e40405698d: Status 404 returned error can't find the container with id 99e8cda2916ba3256f526f9d400e56bf0ae9d1da2c11495bca8664e40405698d Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.321998 4869 generic.go:334] "Generic (PLEG): container finished" podID="2097f350-00d8-4077-8864-1e2f78ab718f" containerID="4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9" exitCode=0 Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.322046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerDied","Data":"4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9"} Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.324166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"99e8cda2916ba3256f526f9d400e56bf0ae9d1da2c11495bca8664e40405698d"} Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.426100 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518411 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518514 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518532 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518581 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.518617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") pod \"2097f350-00d8-4077-8864-1e2f78ab718f\" (UID: \"2097f350-00d8-4077-8864-1e2f78ab718f\") " Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.521970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.527644 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.529290 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr" (OuterVolumeSpecName: "kube-api-access-hr5cr") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "kube-api-access-hr5cr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.529585 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts" (OuterVolumeSpecName: "scripts") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.595134 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621775 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621820 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621832 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr5cr\" (UniqueName: \"kubernetes.io/projected/2097f350-00d8-4077-8864-1e2f78ab718f-kube-api-access-hr5cr\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621846 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2097f350-00d8-4077-8864-1e2f78ab718f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.621858 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.657093 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data" (OuterVolumeSpecName: "config-data") pod "2097f350-00d8-4077-8864-1e2f78ab718f" (UID: "2097f350-00d8-4077-8864-1e2f78ab718f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:07 crc kubenswrapper[4869]: I0202 15:25:07.724038 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2097f350-00d8-4077-8864-1e2f78ab718f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.334133 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.334103 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"2097f350-00d8-4077-8864-1e2f78ab718f","Type":"ContainerDied","Data":"3af6ab75a56f8bed06c1d0bc83b535b2352c23686aa45e49a7bac1b6f3b2b711"} Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.334522 4869 scope.go:117] "RemoveContainer" containerID="dd2910f485a434b9bdef89a5506ef76fa03acd1bcf36d6644fc591226fcc5a41" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.339505 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"9ef443639735948af5ed4209c954021920832fd3127c205665051bb01b617b44"} Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.352972 4869 scope.go:117] "RemoveContainer" containerID="4dcefdd74941f61ca46fb94962a4b48a09ab902c791403326a6a64e8f9120da9" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.392332 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.415056 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.434378 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:08 crc kubenswrapper[4869]: E0202 15:25:08.434792 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="manila-scheduler" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.434804 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="manila-scheduler" Feb 02 15:25:08 crc kubenswrapper[4869]: E0202 15:25:08.434828 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="probe" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.434835 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="probe" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.435029 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="probe" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.435047 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" containerName="manila-scheduler" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.436055 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.438827 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.446026 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.538919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52b1f1d7-270e-400d-b273-961b7142f38c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htgb4\" (UniqueName: \"kubernetes.io/projected/52b1f1d7-270e-400d-b273-961b7142f38c-kube-api-access-htgb4\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-scripts\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539749 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.539967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641331 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-scripts\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641382 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641480 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52b1f1d7-270e-400d-b273-961b7142f38c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.641669 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htgb4\" (UniqueName: \"kubernetes.io/projected/52b1f1d7-270e-400d-b273-961b7142f38c-kube-api-access-htgb4\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.642962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52b1f1d7-270e-400d-b273-961b7142f38c-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.651456 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.651573 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.653158 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-scripts\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.665621 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b1f1d7-270e-400d-b273-961b7142f38c-config-data\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 crc kubenswrapper[4869]: I0202 15:25:08.670389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htgb4\" (UniqueName: \"kubernetes.io/projected/52b1f1d7-270e-400d-b273-961b7142f38c-kube-api-access-htgb4\") pod \"manila-scheduler-0\" (UID: \"52b1f1d7-270e-400d-b273-961b7142f38c\") " pod="openstack/manila-scheduler-0" Feb 02 15:25:08 
crc kubenswrapper[4869]: I0202 15:25:08.761454 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 02 15:25:09 crc kubenswrapper[4869]: I0202 15:25:09.341827 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 02 15:25:09 crc kubenswrapper[4869]: W0202 15:25:09.342954 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b1f1d7_270e_400d_b273_961b7142f38c.slice/crio-e0ee8bf6c2bf85c265c91460c3e6e5adf49c1dc4555bff0571d40fa712181470 WatchSource:0}: Error finding container e0ee8bf6c2bf85c265c91460c3e6e5adf49c1dc4555bff0571d40fa712181470: Status 404 returned error can't find the container with id e0ee8bf6c2bf85c265c91460c3e6e5adf49c1dc4555bff0571d40fa712181470 Feb 02 15:25:09 crc kubenswrapper[4869]: I0202 15:25:09.355381 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"d833d53a42063ffd7fc9f6f65a65ecbac948ef1dd2edc5a0153ea7eda2c4d438"} Feb 02 15:25:09 crc kubenswrapper[4869]: I0202 15:25:09.478729 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2097f350-00d8-4077-8864-1e2f78ab718f" path="/var/lib/kubelet/pods/2097f350-00d8-4077-8864-1e2f78ab718f/volumes" Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.371558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"52b1f1d7-270e-400d-b273-961b7142f38c","Type":"ContainerStarted","Data":"e15924b76dbfbb1cb39b23c02385461dc684e01ac7ea39a9c16c3e9818b7ac64"} Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.372025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"52b1f1d7-270e-400d-b273-961b7142f38c","Type":"ContainerStarted","Data":"381bf148d3237cc515b70963309987e8108c879b6a6e8c7ebda985c69ada727d"} Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.372051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"52b1f1d7-270e-400d-b273-961b7142f38c","Type":"ContainerStarted","Data":"e0ee8bf6c2bf85c265c91460c3e6e5adf49c1dc4555bff0571d40fa712181470"} Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.377257 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"df9be49cca42f67c993f5977d2c900cbf370a6ee3f97d5d5a2ab900622320942"} Feb 02 15:25:10 crc kubenswrapper[4869]: I0202 15:25:10.407802 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.407783265 podStartE2EDuration="2.407783265s" podCreationTimestamp="2026-02-02 15:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:25:10.407409455 +0000 UTC m=+3112.052046235" watchObservedRunningTime="2026-02-02 15:25:10.407783265 +0000 UTC m=+3112.052420035" Feb 02 15:25:11 crc kubenswrapper[4869]: I0202 15:25:11.374648 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 02 15:25:12 crc kubenswrapper[4869]: I0202 15:25:12.558873 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Feb 02 15:25:13 crc kubenswrapper[4869]: I0202 15:25:13.403816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58069dba-f825-4ee3-972d-85d122369b28","Type":"ContainerStarted","Data":"ef265b77af52b5aeb03e2bd865dc5c9227c8ce7fb2220f6719b6094699495227"} Feb 02 15:25:13 crc kubenswrapper[4869]: I0202 15:25:13.404660 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.150516 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.170026 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.041207331 podStartE2EDuration="10.170009516s" podCreationTimestamp="2026-02-02 15:25:05 +0000 UTC" firstStartedPulling="2026-02-02 15:25:06.334245824 +0000 UTC m=+3107.978882594" lastFinishedPulling="2026-02-02 15:25:12.463048009 +0000 UTC m=+3114.107684779" observedRunningTime="2026-02-02 15:25:13.44636267 +0000 UTC m=+3115.090999440" watchObservedRunningTime="2026-02-02 15:25:15.170009516 +0000 UTC m=+3116.814646286" Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.202548 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.420390 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="manila-share" containerID="cri-o://bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf" gracePeriod=30 Feb 02 15:25:15 crc kubenswrapper[4869]: I0202 15:25:15.420955 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="probe" containerID="cri-o://cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb" gracePeriod=30 Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.432357 4869 generic.go:334] "Generic (PLEG): container finished" podID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerID="cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb" exitCode=0 Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.432665 4869 generic.go:334] "Generic (PLEG): container finished" podID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerID="bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf" exitCode=1 Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.432687 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerDied","Data":"cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb"} Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.432718 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerDied","Data":"bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf"} Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.759036 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915710 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915793 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.915891 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.916047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.916094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.916125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") pod \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\" (UID: \"42c96e15-1507-4cd1-a8b6-382d40ff13d9\") " Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.916812 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.917811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.923948 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts" (OuterVolumeSpecName: "scripts") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.933326 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.934174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph" (OuterVolumeSpecName: "ceph") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.943726 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg" (OuterVolumeSpecName: "kube-api-access-l5ksg") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "kube-api-access-l5ksg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:16 crc kubenswrapper[4869]: I0202 15:25:16.992954 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018352 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-ceph\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018397 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018412 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018426 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018437 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018448 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5ksg\" (UniqueName: \"kubernetes.io/projected/42c96e15-1507-4cd1-a8b6-382d40ff13d9-kube-api-access-l5ksg\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.018461 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/42c96e15-1507-4cd1-a8b6-382d40ff13d9-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.043762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data" (OuterVolumeSpecName: "config-data") pod "42c96e15-1507-4cd1-a8b6-382d40ff13d9" (UID: "42c96e15-1507-4cd1-a8b6-382d40ff13d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.121017 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c96e15-1507-4cd1-a8b6-382d40ff13d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.442098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"42c96e15-1507-4cd1-a8b6-382d40ff13d9","Type":"ContainerDied","Data":"a5dd2b6085a889dc98e2fb099d3063bc3e713c383fe9013a6e33aac2e5968482"} Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.442153 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.442164 4869 scope.go:117] "RemoveContainer" containerID="cb6f9000331dd35d6cfccdc8797b81868e8d3390beb062ca9a1126c019ce19eb" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.463340 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:17 crc kubenswrapper[4869]: E0202 15:25:17.463688 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.464834 4869 scope.go:117] "RemoveContainer" containerID="bb332499378c20fbcdea576d6085090e51dea61cf9ecb51f6ab2fb709a9451cf" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.484870 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.495026 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.512799 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:17 crc kubenswrapper[4869]: E0202 15:25:17.514224 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="probe" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.514248 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="probe" Feb 02 15:25:17 crc kubenswrapper[4869]: E0202 15:25:17.514272 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="manila-share" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.514281 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="manila-share" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.514467 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="probe" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.514490 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" containerName="manila-share" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.515527 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.523178 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.525894 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.632835 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-ceph\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.632929 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.632962 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633005 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-scripts\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633035 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633310 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8t6k\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-kube-api-access-t8t6k\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633390 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.633525 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc 
kubenswrapper[4869]: I0202 15:25:17.735592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-scripts\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8t6k\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-kube-api-access-t8t6k\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735763 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735822 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735865 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-ceph\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.735982 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.736105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.736105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/0df9e23b-1681-42de-b9d6-87c4c518d082-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.740731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-scripts\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.741496 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.741687 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.741902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-ceph\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.751835 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df9e23b-1681-42de-b9d6-87c4c518d082-config-data\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.756210 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8t6k\" (UniqueName: \"kubernetes.io/projected/0df9e23b-1681-42de-b9d6-87c4c518d082-kube-api-access-t8t6k\") pod \"manila-share-share1-0\" (UID: \"0df9e23b-1681-42de-b9d6-87c4c518d082\") " pod="openstack/manila-share-share1-0" Feb 02 15:25:17 crc kubenswrapper[4869]: I0202 15:25:17.855189 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 02 15:25:18 crc kubenswrapper[4869]: I0202 15:25:18.425498 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 02 15:25:18 crc kubenswrapper[4869]: I0202 15:25:18.457604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0df9e23b-1681-42de-b9d6-87c4c518d082","Type":"ContainerStarted","Data":"9c76628f582e0f3062c27e386e49bf7e716e644be538157f8c87366563b87726"} Feb 02 15:25:18 crc kubenswrapper[4869]: I0202 15:25:18.762869 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 02 15:25:19 crc kubenswrapper[4869]: I0202 15:25:19.477569 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c96e15-1507-4cd1-a8b6-382d40ff13d9" path="/var/lib/kubelet/pods/42c96e15-1507-4cd1-a8b6-382d40ff13d9/volumes" Feb 02 15:25:19 crc kubenswrapper[4869]: I0202 15:25:19.479803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0df9e23b-1681-42de-b9d6-87c4c518d082","Type":"ContainerStarted","Data":"e193a6ce6ea41c820ae9cf91823554297174f7b60f5cd098b687c4412bf810f5"} Feb 02 15:25:19 crc kubenswrapper[4869]: I0202 15:25:19.479836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0df9e23b-1681-42de-b9d6-87c4c518d082","Type":"ContainerStarted","Data":"a1f1480611f391c486bf2a8158a08c804cdc90d6393e3e92236f41953713aa73"} Feb 02 15:25:19 crc kubenswrapper[4869]: I0202 15:25:19.521685 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.521670475 podStartE2EDuration="2.521670475s" podCreationTimestamp="2026-02-02 15:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:25:19.517118934 +0000 UTC m=+3121.161755704" watchObservedRunningTime="2026-02-02 15:25:19.521670475 +0000 UTC m=+3121.166307245" Feb 02 15:25:22 crc kubenswrapper[4869]: I0202 15:25:22.558905 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-74748d768-vjhn2" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.247:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.247:8443: connect: connection refused" Feb 02 15:25:27 crc kubenswrapper[4869]: I0202 15:25:27.856101 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.165556 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247492 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247619 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247719 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247781 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247879 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.247996 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.248053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") pod \"74249215-4cd6-45b3-b2ab-6aa245e963f2\" (UID: \"74249215-4cd6-45b3-b2ab-6aa245e963f2\") " Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.249120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs" (OuterVolumeSpecName: "logs") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.254185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg" (OuterVolumeSpecName: "kube-api-access-vtscg") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "kube-api-access-vtscg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.254366 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.273726 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts" (OuterVolumeSpecName: "scripts") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.281599 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data" (OuterVolumeSpecName: "config-data") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.285272 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.309181 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "74249215-4cd6-45b3-b2ab-6aa245e963f2" (UID: "74249215-4cd6-45b3-b2ab-6aa245e963f2"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351220 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351259 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74249215-4cd6-45b3-b2ab-6aa245e963f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351272 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74249215-4cd6-45b3-b2ab-6aa245e963f2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351282 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351298 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351310 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtscg\" (UniqueName: \"kubernetes.io/projected/74249215-4cd6-45b3-b2ab-6aa245e963f2-kube-api-access-vtscg\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.351320 4869 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/74249215-4cd6-45b3-b2ab-6aa245e963f2-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583337 4869 generic.go:334] "Generic (PLEG): container finished" podID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerID="9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" exitCode=137 Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerDied","Data":"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813"} Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583454 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-74748d768-vjhn2" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583481 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-74748d768-vjhn2" event={"ID":"74249215-4cd6-45b3-b2ab-6aa245e963f2","Type":"ContainerDied","Data":"bb317fe37d1fca98ae0b5bc915c94ff30a5b109bb554ebf2814b1106d864e8a6"} Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.583512 4869 scope.go:117] "RemoveContainer" containerID="1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.636509 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.647745 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-74748d768-vjhn2"] Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.784765 4869 scope.go:117] "RemoveContainer" containerID="9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.809724 4869 scope.go:117] "RemoveContainer" containerID="1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" Feb 02 15:25:28 crc kubenswrapper[4869]: E0202 15:25:28.810334 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597\": container with ID starting with 1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597 not found: ID does not exist" containerID="1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.810400 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597"} err="failed to get container status \"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597\": rpc error: code = NotFound desc = could not find container \"1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597\": container with ID starting with 1efbf2e95d3dc549824daefaa65264f5ebe9de2a8b49e7479238cbdd16bbd597 not found: ID does not exist" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.810440 4869 scope.go:117] "RemoveContainer" containerID="9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" Feb 02 15:25:28 crc kubenswrapper[4869]: E0202 15:25:28.810959 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813\": container with ID starting with 9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813 not found: ID does not exist" containerID="9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813" Feb 02 15:25:28 crc kubenswrapper[4869]: I0202 15:25:28.810996 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813"} err="failed to get container status \"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813\": rpc error: code = NotFound desc = could not find container \"9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813\": container with ID starting with 9d2cf4aa1994c648387d6bb60ffd2d1e6a0c2f80d1819b59239cb3f83cb39813 not found: ID does not exist" Feb 02 15:25:29 crc 
kubenswrapper[4869]: I0202 15:25:29.470708 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:29 crc kubenswrapper[4869]: E0202 15:25:29.470973 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:25:29 crc kubenswrapper[4869]: I0202 15:25:29.474061 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" path="/var/lib/kubelet/pods/74249215-4cd6-45b3-b2ab-6aa245e963f2/volumes" Feb 02 15:25:30 crc kubenswrapper[4869]: I0202 15:25:30.338192 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 02 15:25:35 crc kubenswrapper[4869]: I0202 15:25:35.739013 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 15:25:39 crc kubenswrapper[4869]: I0202 15:25:39.365511 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 02 15:25:42 crc kubenswrapper[4869]: I0202 15:25:42.463314 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:42 crc kubenswrapper[4869]: E0202 15:25:42.464065 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:25:54 crc kubenswrapper[4869]: I0202 15:25:54.463023 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:25:54 crc kubenswrapper[4869]: E0202 15:25:54.463701 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:26:09 crc kubenswrapper[4869]: I0202 15:26:09.476367 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:26:09 crc kubenswrapper[4869]: E0202 15:26:09.486536 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:26:22 crc kubenswrapper[4869]: I0202 15:26:22.463344 4869 scope.go:117] "RemoveContainer" 
containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:26:22 crc kubenswrapper[4869]: E0202 15:26:22.464190 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:26:37 crc kubenswrapper[4869]: I0202 15:26:37.463132 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:26:37 crc kubenswrapper[4869]: E0202 15:26:37.463963 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.927375 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 15:26:42 crc kubenswrapper[4869]: E0202 15:26:42.928409 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon-log" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.928427 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon-log" Feb 02 15:26:42 crc kubenswrapper[4869]: E0202 15:26:42.928453 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.928462 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.928700 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.928723 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="74249215-4cd6-45b3-b2ab-6aa245e963f2" containerName="horizon-log" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.929529 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.931965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-72k4z" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.932642 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.932836 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.934829 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 02 15:26:42 crc kubenswrapper[4869]: I0202 15:26:42.952889 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101322 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101661 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101806 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.101959 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204181 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204227 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204417 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.204830 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.205448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.205577 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.205723 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.205740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.211066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.214639 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest" Feb 02 15:26:43 crc 
Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.214707 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest"
Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.224173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest"
Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.241181 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " pod="openstack/tempest-tests-tempest"
Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.267454 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.709060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Feb 02 15:26:43 crc kubenswrapper[4869]: I0202 15:26:43.716126 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 15:26:44 crc kubenswrapper[4869]: I0202 15:26:44.341179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccbb21f-23d9-48be-a212-547e064326f6","Type":"ContainerStarted","Data":"c08d2dd97b8a58de7b4399802e9fdd669c46ddb7f1d0f2a64a4f17afc41bb15d"}
Feb 02 15:26:49 crc kubenswrapper[4869]: I0202 15:26:49.470617 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:26:49 crc kubenswrapper[4869]: E0202 15:26:49.471559 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:27:01 crc kubenswrapper[4869]: I0202 15:27:01.462362 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:27:01 crc kubenswrapper[4869]: E0202 15:27:01.463295 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:27:16 crc kubenswrapper[4869]: I0202 15:27:16.462514 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb"
Feb 02 15:27:17 crc kubenswrapper[4869]: E0202 15:27:17.074898 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Feb 02 15:27:17 crc kubenswrapper[4869]: E0202 15:27:17.075338 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh7qj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(1ccbb21f-23d9-48be-a212-547e064326f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 15:27:17 crc kubenswrapper[4869]: E0202 15:27:17.076566 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="1ccbb21f-23d9-48be-a212-547e064326f6"
Feb 02 15:27:17 crc kubenswrapper[4869]: I0202 15:27:17.674550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3"}
Feb 02 15:27:17 crc kubenswrapper[4869]: E0202 15:27:17.678042 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="1ccbb21f-23d9-48be-a212-547e064326f6"
Feb 02 15:27:31 crc kubenswrapper[4869]: I0202 15:27:31.959474 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Feb 02 15:27:33 crc kubenswrapper[4869]: I0202 15:27:33.864207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccbb21f-23d9-48be-a212-547e064326f6","Type":"ContainerStarted","Data":"ac9a60d8c10f53a0410a3a801abad85986e73c2832d375d41caefea008863171"}
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.022800 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.782522502 podStartE2EDuration="53.022746138s" podCreationTimestamp="2026-02-02 15:26:41 +0000 UTC" firstStartedPulling="2026-02-02 15:26:43.715852606 +0000 UTC m=+3205.360489376" lastFinishedPulling="2026-02-02 15:27:31.956076232 +0000 UTC m=+3253.600713012" observedRunningTime="2026-02-02 15:27:33.884777272 +0000 UTC m=+3255.529414082" watchObservedRunningTime="2026-02-02 15:27:34.022746138 +0000 UTC m=+3255.667382948"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.033783 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t626s"]
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.038434 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.052739 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"]
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.181088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.181142 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.181250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.282996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.283048 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.283131 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.283567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.283592 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.304849 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") pod \"redhat-operators-t626s\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") " pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.403995 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:34 crc kubenswrapper[4869]: I0202 15:27:34.869312 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"]
Feb 02 15:27:35 crc kubenswrapper[4869]: I0202 15:27:35.883674 4869 generic.go:334] "Generic (PLEG): container finished" podID="994000fc-8ba9-47d0-a120-3283878441d5" containerID="c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2" exitCode=0
Feb 02 15:27:35 crc kubenswrapper[4869]: I0202 15:27:35.883715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerDied","Data":"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2"}
Feb 02 15:27:35 crc kubenswrapper[4869]: I0202 15:27:35.883986 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerStarted","Data":"b588669e13cddff568ae7057846a90811cd14fb59157179225e707a0db9a55e1"}
Feb 02 15:27:36 crc kubenswrapper[4869]: I0202 15:27:36.894980 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerStarted","Data":"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1"}
Feb 02 15:27:39 crc kubenswrapper[4869]: I0202 15:27:39.923181 4869 generic.go:334] "Generic (PLEG): container finished" podID="994000fc-8ba9-47d0-a120-3283878441d5" containerID="49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1" exitCode=0
Feb 02 15:27:39 crc kubenswrapper[4869]: I0202 15:27:39.923226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerDied","Data":"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1"}
Feb 02 15:27:40 crc kubenswrapper[4869]: I0202 15:27:40.936345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerStarted","Data":"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a"}
Feb 02 15:27:40 crc kubenswrapper[4869]: I0202 15:27:40.962693 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t626s" podStartSLOduration=3.433605078 podStartE2EDuration="7.962666506s" podCreationTimestamp="2026-02-02 15:27:33 +0000 UTC" firstStartedPulling="2026-02-02 15:27:35.88614264 +0000 UTC m=+3257.530779410" lastFinishedPulling="2026-02-02 15:27:40.415204068 +0000 UTC m=+3262.059840838" observedRunningTime="2026-02-02 15:27:40.957679584 +0000 UTC m=+3262.602316364" watchObservedRunningTime="2026-02-02 15:27:40.962666506 +0000 UTC m=+3262.607303286"
Feb 02 15:27:44 crc kubenswrapper[4869]: I0202 15:27:44.405093 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:44 crc kubenswrapper[4869]: I0202 15:27:44.405651 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:27:45 crc kubenswrapper[4869]: I0202 15:27:45.453758 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t626s" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" probeResult="failure" output=<
Feb 02 15:27:45 crc kubenswrapper[4869]: 	timeout: failed to connect service ":50051" within 1s
Feb 02 15:27:45 crc kubenswrapper[4869]: >
Feb 02 15:27:55 crc kubenswrapper[4869]: I0202 15:27:55.457000 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t626s" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" probeResult="failure" output=<
Feb 02 15:27:55 crc kubenswrapper[4869]: 	timeout: failed to connect service ":50051" within 1s
Feb 02 15:27:55 crc kubenswrapper[4869]: >
Feb 02 15:28:04 crc kubenswrapper[4869]: I0202 15:28:04.449517 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:28:04 crc kubenswrapper[4869]: I0202 15:28:04.513972 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:28:05 crc kubenswrapper[4869]: I0202 15:28:05.225783 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"]
Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.200433 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t626s" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" containerID="cri-o://1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" gracePeriod=2
Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.655130 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t626s"
Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.741427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") pod \"994000fc-8ba9-47d0-a120-3283878441d5\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") "
Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.741814 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") pod \"994000fc-8ba9-47d0-a120-3283878441d5\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") "
Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.741973 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") pod \"994000fc-8ba9-47d0-a120-3283878441d5\" (UID: \"994000fc-8ba9-47d0-a120-3283878441d5\") "
Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.742970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities" (OuterVolumeSpecName: "utilities") pod "994000fc-8ba9-47d0-a120-3283878441d5" (UID: "994000fc-8ba9-47d0-a120-3283878441d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
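The registry-server startup probe above keeps failing with `timeout: failed to connect service ":50051" within 1s` until the catalog finishes loading. A minimal Go stand-in for that check, dialing the gRPC port with the same one-second budget; the address and timeout come from the log output, everything else is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeOnce performs a single TCP-level reachability check, roughly what a
// port-based startup probe boils down to before the service answers health RPCs.
func probeOnce(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %s", addr, timeout)
	}
	return conn.Close()
}

func main() {
	if err := probeOnce(":50051", time.Second); err != nil {
		fmt.Println("startup probe failed:", err)
		return
	}
	fmt.Println("startup probe succeeded")
}

In the log the probe flips from "unhealthy" at 15:27:44 to "started" at 15:28:04, so the catalog took roughly twenty extra seconds to begin serving after its container started.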
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.748706 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6" (OuterVolumeSpecName: "kube-api-access-h5kr6") pod "994000fc-8ba9-47d0-a120-3283878441d5" (UID: "994000fc-8ba9-47d0-a120-3283878441d5"). InnerVolumeSpecName "kube-api-access-h5kr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.844197 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5kr6\" (UniqueName: \"kubernetes.io/projected/994000fc-8ba9-47d0-a120-3283878441d5-kube-api-access-h5kr6\") on node \"crc\" DevicePath \"\"" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.844240 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.868252 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "994000fc-8ba9-47d0-a120-3283878441d5" (UID: "994000fc-8ba9-47d0-a120-3283878441d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:28:06 crc kubenswrapper[4869]: I0202 15:28:06.947085 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/994000fc-8ba9-47d0-a120-3283878441d5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216236 4869 generic.go:334] "Generic (PLEG): container finished" podID="994000fc-8ba9-47d0-a120-3283878441d5" containerID="1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" exitCode=0 Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerDied","Data":"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a"} Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216328 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t626s" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t626s" event={"ID":"994000fc-8ba9-47d0-a120-3283878441d5","Type":"ContainerDied","Data":"b588669e13cddff568ae7057846a90811cd14fb59157179225e707a0db9a55e1"} Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.216388 4869 scope.go:117] "RemoveContainer" containerID="1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.247932 4869 scope.go:117] "RemoveContainer" containerID="49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.281042 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"] Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.300822 4869 scope.go:117] "RemoveContainer" containerID="c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.314628 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t626s"] Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.358430 4869 scope.go:117] "RemoveContainer" containerID="1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" Feb 02 15:28:07 crc kubenswrapper[4869]: E0202 15:28:07.358935 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a\": container with ID starting with 1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a not found: ID does not exist" containerID="1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.358966 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a"} err="failed to get container status \"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a\": rpc error: code = NotFound desc = could not find container \"1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a\": container with ID starting with 1951cbcd6f178195c484f52a3cc43e6cae52d3844259297ba9ed86275de7d32a not found: ID does not exist" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.358989 4869 scope.go:117] "RemoveContainer" containerID="49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1" Feb 02 15:28:07 crc kubenswrapper[4869]: E0202 15:28:07.359989 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1\": container with ID starting with 49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1 not found: ID does not exist" containerID="49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.360043 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1"} err="failed to get container status \"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1\": rpc error: code = NotFound desc = could not find container 
\"49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1\": container with ID starting with 49bb0348c67e05a476c2a95d44512930d9dbbbe0f1086e10e8082ae124a0a9a1 not found: ID does not exist" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.360078 4869 scope.go:117] "RemoveContainer" containerID="c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2" Feb 02 15:28:07 crc kubenswrapper[4869]: E0202 15:28:07.360446 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2\": container with ID starting with c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2 not found: ID does not exist" containerID="c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.360486 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2"} err="failed to get container status \"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2\": rpc error: code = NotFound desc = could not find container \"c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2\": container with ID starting with c5d8f547600d2f62708e53819174b961bd2d2336d5f501735cf50cbdf194a1f2 not found: ID does not exist" Feb 02 15:28:07 crc kubenswrapper[4869]: I0202 15:28:07.475937 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994000fc-8ba9-47d0-a120-3283878441d5" path="/var/lib/kubelet/pods/994000fc-8ba9-47d0-a120-3283878441d5/volumes" Feb 02 15:29:45 crc kubenswrapper[4869]: I0202 15:29:45.303678 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:29:45 crc kubenswrapper[4869]: I0202 15:29:45.304317 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.158067 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr"] Feb 02 15:30:00 crc kubenswrapper[4869]: E0202 15:30:00.159054 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="extract-utilities" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.159071 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="extract-utilities" Feb 02 15:30:00 crc kubenswrapper[4869]: E0202 15:30:00.159100 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.159109 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" Feb 02 15:30:00 crc kubenswrapper[4869]: E0202 15:30:00.159119 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="extract-content" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.159128 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="extract-content" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.159357 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="994000fc-8ba9-47d0-a120-3283878441d5" containerName="registry-server" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.160279 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.162467 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.162663 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.168304 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr"] Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.249444 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.249536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.249676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.351334 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.351417 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.351536 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.352440 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.360743 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.373773 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") pod \"collect-profiles-29500770-49zxr\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.495634 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:00 crc kubenswrapper[4869]: I0202 15:30:00.960598 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr"] Feb 02 15:30:01 crc kubenswrapper[4869]: I0202 15:30:01.302302 4869 generic.go:334] "Generic (PLEG): container finished" podID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerID="649f59c8bcd1fef60a0e269541fe8492287d8caf17da4acdbee1c9eb014035eb" exitCode=0 Feb 02 15:30:01 crc kubenswrapper[4869]: I0202 15:30:01.302405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" event={"ID":"869f1f5c-3365-4b92-8459-76f5a3a9611f","Type":"ContainerDied","Data":"649f59c8bcd1fef60a0e269541fe8492287d8caf17da4acdbee1c9eb014035eb"} Feb 02 15:30:01 crc kubenswrapper[4869]: I0202 15:30:01.302716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" event={"ID":"869f1f5c-3365-4b92-8459-76f5a3a9611f","Type":"ContainerStarted","Data":"33d683832c70e6846d1828ccb1ccb48cffbd7b101d8ecc2f6e1707278aaf3017"} Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.792495 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.908248 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") pod \"869f1f5c-3365-4b92-8459-76f5a3a9611f\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.908839 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") pod \"869f1f5c-3365-4b92-8459-76f5a3a9611f\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.909019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") pod \"869f1f5c-3365-4b92-8459-76f5a3a9611f\" (UID: \"869f1f5c-3365-4b92-8459-76f5a3a9611f\") " Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.909033 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume" (OuterVolumeSpecName: "config-volume") pod "869f1f5c-3365-4b92-8459-76f5a3a9611f" (UID: "869f1f5c-3365-4b92-8459-76f5a3a9611f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.909577 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869f1f5c-3365-4b92-8459-76f5a3a9611f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.914687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4" (OuterVolumeSpecName: "kube-api-access-6c7s4") pod "869f1f5c-3365-4b92-8459-76f5a3a9611f" (UID: "869f1f5c-3365-4b92-8459-76f5a3a9611f"). InnerVolumeSpecName "kube-api-access-6c7s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:30:02 crc kubenswrapper[4869]: I0202 15:30:02.918123 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "869f1f5c-3365-4b92-8459-76f5a3a9611f" (UID: "869f1f5c-3365-4b92-8459-76f5a3a9611f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.011963 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869f1f5c-3365-4b92-8459-76f5a3a9611f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.011995 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c7s4\" (UniqueName: \"kubernetes.io/projected/869f1f5c-3365-4b92-8459-76f5a3a9611f-kube-api-access-6c7s4\") on node \"crc\" DevicePath \"\"" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.323099 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" event={"ID":"869f1f5c-3365-4b92-8459-76f5a3a9611f","Type":"ContainerDied","Data":"33d683832c70e6846d1828ccb1ccb48cffbd7b101d8ecc2f6e1707278aaf3017"} Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.323148 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d683832c70e6846d1828ccb1ccb48cffbd7b101d8ecc2f6e1707278aaf3017" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.323183 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500770-49zxr" Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.882662 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 15:30:03 crc kubenswrapper[4869]: I0202 15:30:03.907567 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500725-v4bfh"] Feb 02 15:30:05 crc kubenswrapper[4869]: I0202 15:30:05.479020 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a6eca8-9d17-4791-add2-36c7119da5a5" path="/var/lib/kubelet/pods/f4a6eca8-9d17-4791-add2-36c7119da5a5/volumes" Feb 02 15:30:15 crc kubenswrapper[4869]: I0202 15:30:15.304189 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:30:15 crc kubenswrapper[4869]: I0202 15:30:15.304782 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:30:31 crc kubenswrapper[4869]: I0202 15:30:31.164137 4869 scope.go:117] "RemoveContainer" containerID="28b9935993b50888d9171d31e34b1e8a7654cd4a7e60abd6660f4755c8d99b31" Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.304712 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.305160 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.305287 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.306047 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.306115 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3" gracePeriod=600 Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.713833 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3" exitCode=0 Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.713917 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3"} Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.714262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"} Feb 02 15:30:45 crc kubenswrapper[4869]: I0202 15:30:45.714293 4869 scope.go:117] "RemoveContainer" containerID="c9e370b0938c245f2070cade2c4f558635acc074458a6c23f25a29fb8154c1eb" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.316316 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:06 crc kubenswrapper[4869]: E0202 15:31:06.317661 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.317685 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.318262 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles" Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.321043 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.316316 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5q84c"]
Feb 02 15:31:06 crc kubenswrapper[4869]: E0202 15:31:06.317661 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.317685 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.318262 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="869f1f5c-3365-4b92-8459-76f5a3a9611f" containerName="collect-profiles"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.321043 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.331139 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"]
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.422479 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.422587 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.422637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525029 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525113 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525255 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.525902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.546187 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") pod \"certified-operators-5q84c\" (UID: \"41426242-9734-4a7d-a77f-3d0b2ef6b467\") " pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:06 crc kubenswrapper[4869]: I0202 15:31:06.670521 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:07 crc kubenswrapper[4869]: I0202 15:31:07.221743 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"]
Feb 02 15:31:07 crc kubenswrapper[4869]: I0202 15:31:07.915359 4869 generic.go:334] "Generic (PLEG): container finished" podID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerID="843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15" exitCode=0
Feb 02 15:31:07 crc kubenswrapper[4869]: I0202 15:31:07.915401 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerDied","Data":"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15"}
Feb 02 15:31:07 crc kubenswrapper[4869]: I0202 15:31:07.915683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerStarted","Data":"277152112f332f87ac8340ae43964e01f486c32a7b4f6924bbdad83677a450a2"}
Feb 02 15:31:09 crc kubenswrapper[4869]: I0202 15:31:09.939787 4869 generic.go:334] "Generic (PLEG): container finished" podID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerID="266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e" exitCode=0
Feb 02 15:31:09 crc kubenswrapper[4869]: I0202 15:31:09.939863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerDied","Data":"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e"}
Feb 02 15:31:10 crc kubenswrapper[4869]: I0202 15:31:10.955974 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerStarted","Data":"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33"}
Feb 02 15:31:10 crc kubenswrapper[4869]: I0202 15:31:10.987889 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5q84c" podStartSLOduration=2.438247945 podStartE2EDuration="4.987861201s" podCreationTimestamp="2026-02-02 15:31:06 +0000 UTC" firstStartedPulling="2026-02-02 15:31:07.917097622 +0000 UTC m=+3469.561734392" lastFinishedPulling="2026-02-02 15:31:10.466710878 +0000 UTC m=+3472.111347648" observedRunningTime="2026-02-02 15:31:10.974654999 +0000 UTC m=+3472.619291809" watchObservedRunningTime="2026-02-02 15:31:10.987861201 +0000 UTC m=+3472.632497971"
Feb 02 15:31:16 crc kubenswrapper[4869]: I0202 15:31:16.671411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:16 crc kubenswrapper[4869]: I0202 15:31:16.672765 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:16 crc kubenswrapper[4869]: I0202 15:31:16.733283 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:17 crc kubenswrapper[4869]: I0202 15:31:17.090548 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5q84c"
Feb 02 15:31:17 crc kubenswrapper[4869]: I0202 15:31:17.489703 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"]
Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.050497 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5q84c" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="registry-server" containerID="cri-o://d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" gracePeriod=2
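
["Killing container with a grace period ... gracePeriod=2" is the SIGTERM-then-SIGKILL pattern: the runtime delivers SIGTERM, waits up to the grace period for the process to exit (the ContainerDied events below report exitCode=0), and only then forces SIGKILL. A process-level Go sketch of the same pattern — illustrative only; CRI-O performs this through the container runtime, not via os/exec:

    package main

    import (
            "fmt"
            "os/exec"
            "syscall"
            "time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace, then escalates to SIGKILL.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
            _ = cmd.Process.Signal(syscall.SIGTERM)
            done := make(chan error, 1)
            go func() { done <- cmd.Wait() }()
            select {
            case err := <-done:
                    return err // exited within the grace period
            case <-time.After(grace):
                    _ = cmd.Process.Kill() // SIGKILL once the grace period elapses
                    return <-done
            }
    }

    func main() {
            cmd := exec.Command("sleep", "60")
            if err := cmd.Start(); err != nil {
                    panic(err)
            }
            // gracePeriod=2 as in the log entry above.
            fmt.Println("wait result:", stopWithGrace(cmd, 2*time.Second))
    }
]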
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.936959 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.937005 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41426242-9734-4a7d-a77f-3d0b2ef6b467-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:19 crc kubenswrapper[4869]: I0202 15:31:19.937020 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65cvc\" (UniqueName: \"kubernetes.io/projected/41426242-9734-4a7d-a77f-3d0b2ef6b467-kube-api-access-65cvc\") on node \"crc\" DevicePath \"\"" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061824 4869 generic.go:334] "Generic (PLEG): container finished" podID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerID="d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" exitCode=0 Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerDied","Data":"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33"} Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061884 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q84c" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q84c" event={"ID":"41426242-9734-4a7d-a77f-3d0b2ef6b467","Type":"ContainerDied","Data":"277152112f332f87ac8340ae43964e01f486c32a7b4f6924bbdad83677a450a2"} Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.061938 4869 scope.go:117] "RemoveContainer" containerID="d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.086405 4869 scope.go:117] "RemoveContainer" containerID="266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.102790 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.111225 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5q84c"] Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.127273 4869 scope.go:117] "RemoveContainer" containerID="843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.176838 4869 scope.go:117] "RemoveContainer" containerID="d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" Feb 02 15:31:20 crc kubenswrapper[4869]: E0202 15:31:20.177719 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33\": container with ID starting with d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33 not found: ID does not exist" containerID="d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.177763 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33"} err="failed to get container status \"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33\": rpc error: code = NotFound desc = could not find container \"d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33\": container with ID starting with d123fe99f98a678ae9cd7fca22fa656e38a437ae1a12ec7b961291717baada33 not found: ID does not exist" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.177785 4869 scope.go:117] "RemoveContainer" containerID="266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e" Feb 02 15:31:20 crc kubenswrapper[4869]: E0202 15:31:20.178044 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e\": container with ID starting with 266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e not found: ID does not exist" containerID="266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.178067 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e"} err="failed to get container status \"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e\": rpc error: code = NotFound desc = could not find container \"266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e\": container with ID starting with 266bdd672646588af5fa8eb39f630b314695f3c78c11866133d6f4c2617b8b7e not found: ID does not exist" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.178083 4869 scope.go:117] "RemoveContainer" containerID="843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15" Feb 02 15:31:20 crc kubenswrapper[4869]: E0202 15:31:20.178271 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15\": container with ID starting with 843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15 not found: ID does not exist" containerID="843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15" Feb 02 15:31:20 crc kubenswrapper[4869]: I0202 15:31:20.178290 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15"} err="failed to get container status \"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15\": rpc error: code = NotFound desc = could not find container \"843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15\": container with ID starting with 843d6fd5fe7277eabe88b5606ed33435694add409ace0eedb0b02e88e84e8f15 not found: ID does not exist" Feb 02 15:31:21 crc kubenswrapper[4869]: I0202 15:31:21.474754 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" path="/var/lib/kubelet/pods/41426242-9734-4a7d-a77f-3d0b2ef6b467/volumes" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.286612 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wpwnv"] Feb 02 15:31:28 crc kubenswrapper[4869]: E0202 15:31:28.287449 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
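
[The three RemoveContainer calls above fail with NotFound because the containers were already gone along with the pod sandbox; the deletor logs the error and moves on, so deletion is effectively idempotent. A generic Go sketch of that pattern — the ErrNotFound sentinel and runtime type are illustrative stand-ins, not CRI types:

    package main

    import (
            "errors"
            "fmt"
    )

    // ErrNotFound stands in for the gRPC NotFound seen in the log.
    var ErrNotFound = errors.New("not found")

    type runtime struct{ containers map[string]bool }

    func (r *runtime) Remove(id string) error {
            if !r.containers[id] {
                    return fmt.Errorf("could not find container %q: %w", id, ErrNotFound)
            }
            delete(r.containers, id)
            return nil
    }

    // removeIdempotent treats "already gone" as success, as the kubelet
    // effectively does when it just logs the NotFound and continues.
    func removeIdempotent(r *runtime, id string) error {
            if err := r.Remove(id); err != nil && !errors.Is(err, ErrNotFound) {
                    return err
            }
            return nil
    }

    func main() {
            r := &runtime{containers: map[string]bool{"abc": true}}
            fmt.Println(removeIdempotent(r, "abc")) // <nil>: removed
            fmt.Println(removeIdempotent(r, "abc")) // <nil>: already gone, error swallowed
    }
]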
podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="extract-content" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.287461 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="extract-content" Feb 02 15:31:28 crc kubenswrapper[4869]: E0202 15:31:28.287473 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="extract-utilities" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.287483 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="extract-utilities" Feb 02 15:31:28 crc kubenswrapper[4869]: E0202 15:31:28.287543 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="registry-server" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.287549 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="registry-server" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.287714 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="41426242-9734-4a7d-a77f-3d0b2ef6b467" containerName="registry-server" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.289024 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.299916 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"] Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.426932 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.427375 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.427582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.529386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.529504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") 
pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.529561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.529991 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.530072 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.551111 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") pod \"community-operators-wpwnv\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") " pod="openshift-marketplace/community-operators-wpwnv" Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.624955 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 15:31:28 crc kubenswrapper[4869]: I0202 15:31:28.624955 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpwnv"
Feb 02 15:31:29 crc kubenswrapper[4869]: I0202 15:31:29.216699 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"]
Feb 02 15:31:29 crc kubenswrapper[4869]: W0202 15:31:29.232365 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ea1ffb_7462_485c_855c_ae3a5742ea5c.slice/crio-884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86 WatchSource:0}: Error finding container 884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86: Status 404 returned error can't find the container with id 884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86
Feb 02 15:31:30 crc kubenswrapper[4869]: I0202 15:31:30.150127 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerID="fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82" exitCode=0
Feb 02 15:31:30 crc kubenswrapper[4869]: I0202 15:31:30.151081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerDied","Data":"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82"}
Feb 02 15:31:30 crc kubenswrapper[4869]: I0202 15:31:30.151139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerStarted","Data":"884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86"}
Feb 02 15:31:32 crc kubenswrapper[4869]: I0202 15:31:32.170684 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerID="ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1" exitCode=0
Feb 02 15:31:32 crc kubenswrapper[4869]: I0202 15:31:32.170819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerDied","Data":"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1"}
Feb 02 15:31:33 crc kubenswrapper[4869]: I0202 15:31:33.182555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerStarted","Data":"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"}
Feb 02 15:31:38 crc kubenswrapper[4869]: I0202 15:31:38.625717 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wpwnv"
Feb 02 15:31:38 crc kubenswrapper[4869]: I0202 15:31:38.626089 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wpwnv"
Feb 02 15:31:38 crc kubenswrapper[4869]: I0202 15:31:38.673934 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wpwnv"
Feb 02 15:31:38 crc kubenswrapper[4869]: I0202 15:31:38.721457 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wpwnv" podStartSLOduration=8.293728815 podStartE2EDuration="10.721441027s" podCreationTimestamp="2026-02-02 15:31:28 +0000 UTC" firstStartedPulling="2026-02-02 15:31:30.153585219 +0000 UTC m=+3491.798221989" lastFinishedPulling="2026-02-02 15:31:32.581297431 +0000 UTC m=+3494.225934201" observedRunningTime="2026-02-02 15:31:33.201947411 +0000 UTC m=+3494.846584191" watchObservedRunningTime="2026-02-02 15:31:38.721441027 +0000 UTC m=+3500.366077797"
Feb 02 15:31:39 crc kubenswrapper[4869]: I0202 15:31:39.291100 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wpwnv"
Feb 02 15:31:40 crc kubenswrapper[4869]: I0202 15:31:40.686048 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"]
Feb 02 15:31:41 crc kubenswrapper[4869]: I0202 15:31:41.261891 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wpwnv" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="registry-server" containerID="cri-o://add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9" gracePeriod=2
Feb 02 15:31:41 crc kubenswrapper[4869]: I0202 15:31:41.895706 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpwnv"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.028646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") pod \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") "
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.028777 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") pod \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") "
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.029700 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities" (OuterVolumeSpecName: "utilities") pod "d6ea1ffb-7462-485c-855c-ae3a5742ea5c" (UID: "d6ea1ffb-7462-485c-855c-ae3a5742ea5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.029757 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") pod \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\" (UID: \"d6ea1ffb-7462-485c-855c-ae3a5742ea5c\") "
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.030542 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.036127 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk" (OuterVolumeSpecName: "kube-api-access-mzppk") pod "d6ea1ffb-7462-485c-855c-ae3a5742ea5c" (UID: "d6ea1ffb-7462-485c-855c-ae3a5742ea5c"). InnerVolumeSpecName "kube-api-access-mzppk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.084352 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6ea1ffb-7462-485c-855c-ae3a5742ea5c" (UID: "d6ea1ffb-7462-485c-855c-ae3a5742ea5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.133318 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzppk\" (UniqueName: \"kubernetes.io/projected/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-kube-api-access-mzppk\") on node \"crc\" DevicePath \"\""
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.133376 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ea1ffb-7462-485c-855c-ae3a5742ea5c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272384 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerID="add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9" exitCode=0
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerDied","Data":"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"}
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272729 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wpwnv" event={"ID":"d6ea1ffb-7462-485c-855c-ae3a5742ea5c","Type":"ContainerDied","Data":"884b5d1130d1e02611cff650ba174ff0c351db96a3e9440fb17b3bee48713f86"}
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272753 4869 scope.go:117] "RemoveContainer" containerID="add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.272498 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wpwnv"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.299966 4869 scope.go:117] "RemoveContainer" containerID="ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.317170 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"]
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.329671 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wpwnv"]
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.332749 4869 scope.go:117] "RemoveContainer" containerID="fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.385092 4869 scope.go:117] "RemoveContainer" containerID="add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"
Feb 02 15:31:42 crc kubenswrapper[4869]: E0202 15:31:42.385771 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9\": container with ID starting with add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9 not found: ID does not exist" containerID="add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.385801 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9"} err="failed to get container status \"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9\": rpc error: code = NotFound desc = could not find container \"add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9\": container with ID starting with add5d5a97256122ab5a070065abc8bdf51cdc7c45a6b87ba1ac6f34ea7b891d9 not found: ID does not exist"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.385835 4869 scope.go:117] "RemoveContainer" containerID="ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1"
Feb 02 15:31:42 crc kubenswrapper[4869]: E0202 15:31:42.386231 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1\": container with ID starting with ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1 not found: ID does not exist" containerID="ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.386251 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1"} err="failed to get container status \"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1\": rpc error: code = NotFound desc = could not find container \"ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1\": container with ID starting with ebc540f71936303cc5561023df441ea429e98f113cdfb02cdd6a0cd8ee2197f1 not found: ID does not exist"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.386264 4869 scope.go:117] "RemoveContainer" containerID="fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82"
Feb 02 15:31:42 crc kubenswrapper[4869]: E0202 15:31:42.386761 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82\": container with ID starting with fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82 not found: ID does not exist" containerID="fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82"
Feb 02 15:31:42 crc kubenswrapper[4869]: I0202 15:31:42.386839 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82"} err="failed to get container status \"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82\": rpc error: code = NotFound desc = could not find container \"fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82\": container with ID starting with fc3211d3dbc8c60d9993488fdc3ca85f718592662b5b791a4b1a17342ea76c82 not found: ID does not exist"
Feb 02 15:31:43 crc kubenswrapper[4869]: I0202 15:31:43.477482 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" path="/var/lib/kubelet/pods/d6ea1ffb-7462-485c-855c-ae3a5742ea5c/volumes"
Feb 02 15:32:45 crc kubenswrapper[4869]: I0202 15:32:45.305070 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:32:45 crc kubenswrapper[4869]: I0202 15:32:45.306198 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:33:15 crc kubenswrapper[4869]: I0202 15:33:15.304484 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:33:15 crc kubenswrapper[4869]: I0202 15:33:15.305098 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.304858 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.306176 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.306229 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.307063 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 15:33:45 crc kubenswrapper[4869]: I0202 15:33:45.307134 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" gracePeriod=600
Feb 02 15:33:45 crc kubenswrapper[4869]: E0202 15:33:45.426818 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:33:46 crc kubenswrapper[4869]: I0202 15:33:46.422304 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" exitCode=0
Feb 02 15:33:46 crc kubenswrapper[4869]: I0202 15:33:46.422403 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"}
Feb 02 15:33:46 crc kubenswrapper[4869]: I0202 15:33:46.423485 4869 scope.go:117] "RemoveContainer" containerID="63c42435e11b3fe78de9cbdc67f20b6dae965f18557395875b4b59f4a3faf0c3"
Feb 02 15:33:46 crc kubenswrapper[4869]: I0202 15:33:46.424326 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"
Feb 02 15:33:46 crc kubenswrapper[4869]: E0202 15:33:46.425072 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:33:58 crc kubenswrapper[4869]: I0202 15:33:58.462425 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"
Feb 02 15:33:58 crc kubenswrapper[4869]: E0202 15:33:58.463363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.051032 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-2vhkx"]
Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.062478 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"]
Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.078115 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-2vhkx"]
Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.088024 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-d921-account-create-update-shfv2"]
Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.464104 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"
Feb 02 15:34:13 crc kubenswrapper[4869]: E0202 15:34:13.464485 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.477185 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b666475-dc9a-41e9-b087-b2042c2dd80f" path="/var/lib/kubelet/pods/5b666475-dc9a-41e9-b087-b2042c2dd80f/volumes"
Feb 02 15:34:13 crc kubenswrapper[4869]: I0202 15:34:13.483600 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d70d6af-0f1a-40d1-b0aa-8896b8fcd607" path="/var/lib/kubelet/pods/8d70d6af-0f1a-40d1-b0aa-8896b8fcd607/volumes"
Feb 02 15:34:28 crc kubenswrapper[4869]: I0202 15:34:28.462478 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"
Feb 02 15:34:28 crc kubenswrapper[4869]: E0202 15:34:28.463229 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:34:31 crc kubenswrapper[4869]: I0202 15:34:31.350118 4869 scope.go:117] "RemoveContainer" containerID="f6a65d674c18b4d91e1a4a5378741c663bb46842c68ee5b840ab49a144aef022"
Feb 02 15:34:31 crc kubenswrapper[4869]: I0202 15:34:31.452844 4869 scope.go:117] "RemoveContainer" containerID="e2b3a08d13bb54ca12a353c801a13c65fca6c0e6e63916392001244a909d1156"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.463825 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"
Feb 02 15:34:40 crc kubenswrapper[4869]: E0202 15:34:40.464563 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
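
["Cleaned up orphaned pod volumes dir" is the kubelet's sweep of /var/lib/kubelet/pods for UIDs that no longer correspond to any known pod, deleting their leftover volumes directories. A rough Go sketch of such a sweep — the paths and active-pod set are illustrative, and the real kubelet applies more safety checks before removing anything:

    package main

    import (
            "fmt"
            "os"
            "path/filepath"
    )

    // cleanupOrphans removes <root>/<podUID>/volumes for every directory
    // whose name is not in the set of currently-known pod UIDs.
    func cleanupOrphans(root string, active map[string]bool) error {
            entries, err := os.ReadDir(root)
            if err != nil {
                    return err
            }
            for _, e := range entries {
                    if !e.IsDir() || active[e.Name()] {
                            continue
                    }
                    dir := filepath.Join(root, e.Name(), "volumes")
                    if err := os.RemoveAll(dir); err != nil {
                            return err
                    }
                    fmt.Println("Cleaned up orphaned pod volumes dir", dir)
            }
            return nil
    }

    func main() {
            // Illustrative root; the kubelet itself uses /var/lib/kubelet/pods.
            _ = cleanupOrphans("/tmp/demo-pods", map[string]bool{"a649255d-23ef-4070-9acc-2adb7d94bc21": true})
    }
]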
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.887135 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"]
Feb 02 15:34:40 crc kubenswrapper[4869]: E0202 15:34:40.887891 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="extract-content"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.887915 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="extract-content"
Feb 02 15:34:40 crc kubenswrapper[4869]: E0202 15:34:40.887965 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="extract-utilities"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.887972 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="extract-utilities"
Feb 02 15:34:40 crc kubenswrapper[4869]: E0202 15:34:40.887988 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="registry-server"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.887994 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="registry-server"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.888168 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ea1ffb-7462-485c-855c-ae3a5742ea5c" containerName="registry-server"
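
[The cpu_manager/state_mem/memory_manager pairs fire when a new pod is admitted: before assigning resources, the managers drop bookkeeping entries for containers of pods the kubelet no longer tracks (here the already-deleted community-operators pod). A Go sketch of that map hygiene — the state layout is invented for illustration:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState deletes assignments belonging to pods the kubelet no
    // longer tracks, as cpu_manager and memory_manager log above.
    func removeStaleState(assignments map[key][]int, active map[string]bool) {
            for k := range assignments {
                    if !active[k.podUID] {
                            fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
                            delete(assignments, k)
                    }
            }
    }

    func main() {
            state := map[key][]int{
                    {"d6ea1ffb-7462-485c-855c-ae3a5742ea5c", "registry-server"}: {0, 1},
            }
            removeStaleState(state, map[string]bool{}) // pod already deleted, so the entry is dropped
            fmt.Println(len(state))
    }
]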
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.889456 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.918229 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"]
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.962140 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.962238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:40 crc kubenswrapper[4869]: I0202 15:34:40.962265 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.063861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.063927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.064126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.064368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.064474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.086874 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") pod \"redhat-marketplace-vj5c5\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") " pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.222757 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.799601 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"]
Feb 02 15:34:41 crc kubenswrapper[4869]: W0202 15:34:41.811461 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb04306a_2210_4490_b163_3d8914b6478a.slice/crio-356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5 WatchSource:0}: Error finding container 356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5: Status 404 returned error can't find the container with id 356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5
Feb 02 15:34:41 crc kubenswrapper[4869]: I0202 15:34:41.953870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerStarted","Data":"356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5"}
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.042515 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-jf2x2"]
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.055299 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-jf2x2"]
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.969166 4869 generic.go:334] "Generic (PLEG): container finished" podID="bb04306a-2210-4490-b163-3d8914b6478a" containerID="ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9" exitCode=0
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.969316 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerDied","Data":"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9"}
Feb 02 15:34:42 crc kubenswrapper[4869]: I0202 15:34:42.971819 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 15:34:43 crc kubenswrapper[4869]: I0202 15:34:43.473969 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b453d3-88d6-4fd5-bedc-62e0d4270f20" path="/var/lib/kubelet/pods/d8b453d3-88d6-4fd5-bedc-62e0d4270f20/volumes"
Feb 02 15:34:44 crc kubenswrapper[4869]: I0202 15:34:44.992444 4869 generic.go:334] "Generic (PLEG): container finished" podID="bb04306a-2210-4490-b163-3d8914b6478a" containerID="1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b" exitCode=0
Feb 02 15:34:44 crc kubenswrapper[4869]: I0202 15:34:44.992519 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerDied","Data":"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b"}
Feb 02 15:34:46 crc kubenswrapper[4869]: I0202 15:34:46.007612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerStarted","Data":"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0"}
Feb 02 15:34:46 crc kubenswrapper[4869]: I0202 15:34:46.040948 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vj5c5" podStartSLOduration=3.552986741 podStartE2EDuration="6.040904987s" podCreationTimestamp="2026-02-02 15:34:40 +0000 UTC" firstStartedPulling="2026-02-02 15:34:42.971618464 +0000 UTC m=+3684.616255234" lastFinishedPulling="2026-02-02 15:34:45.45953671 +0000 UTC m=+3687.104173480" observedRunningTime="2026-02-02 15:34:46.029383856 +0000 UTC m=+3687.674020646" watchObservedRunningTime="2026-02-02 15:34:46.040904987 +0000 UTC m=+3687.685541757"
Feb 02 15:34:51 crc kubenswrapper[4869]: I0202 15:34:51.223417 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:51 crc kubenswrapper[4869]: I0202 15:34:51.224824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:51 crc kubenswrapper[4869]: I0202 15:34:51.277271 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:52 crc kubenswrapper[4869]: I0202 15:34:52.101544 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:52 crc kubenswrapper[4869]: I0202 15:34:52.148720 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"]
Feb 02 15:34:52 crc kubenswrapper[4869]: I0202 15:34:52.462607 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580"
Feb 02 15:34:52 crc kubenswrapper[4869]: E0202 15:34:52.462959 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.076833 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vj5c5" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="registry-server" containerID="cri-o://b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" gracePeriod=2
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.796572 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj5c5"
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.857228 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") pod \"bb04306a-2210-4490-b163-3d8914b6478a\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") "
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.857363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") pod \"bb04306a-2210-4490-b163-3d8914b6478a\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") "
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.857655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") pod \"bb04306a-2210-4490-b163-3d8914b6478a\" (UID: \"bb04306a-2210-4490-b163-3d8914b6478a\") "
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.858458 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities" (OuterVolumeSpecName: "utilities") pod "bb04306a-2210-4490-b163-3d8914b6478a" (UID: "bb04306a-2210-4490-b163-3d8914b6478a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.869380 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p" (OuterVolumeSpecName: "kube-api-access-zmc4p") pod "bb04306a-2210-4490-b163-3d8914b6478a" (UID: "bb04306a-2210-4490-b163-3d8914b6478a"). InnerVolumeSpecName "kube-api-access-zmc4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.902926 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb04306a-2210-4490-b163-3d8914b6478a" (UID: "bb04306a-2210-4490-b163-3d8914b6478a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.961150 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.961223 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmc4p\" (UniqueName: \"kubernetes.io/projected/bb04306a-2210-4490-b163-3d8914b6478a-kube-api-access-zmc4p\") on node \"crc\" DevicePath \"\""
Feb 02 15:34:54 crc kubenswrapper[4869]: I0202 15:34:54.961233 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb04306a-2210-4490-b163-3d8914b6478a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087083 4869 generic.go:334] "Generic (PLEG): container finished" podID="bb04306a-2210-4490-b163-3d8914b6478a" containerID="b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" exitCode=0
Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerDied","Data":"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0"}
Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vj5c5" event={"ID":"bb04306a-2210-4490-b163-3d8914b6478a","Type":"ContainerDied","Data":"356ad085784eec5b2c782192f3eeae39e0f5e34b172aa9898e6e8bb4ea2f62b5"}
Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087178 4869 scope.go:117] "RemoveContainer" containerID="b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0"
Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.087208 4869 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vj5c5" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.115948 4869 scope.go:117] "RemoveContainer" containerID="1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.122555 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"] Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.135615 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vj5c5"] Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.172992 4869 scope.go:117] "RemoveContainer" containerID="ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.196054 4869 scope.go:117] "RemoveContainer" containerID="b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" Feb 02 15:34:55 crc kubenswrapper[4869]: E0202 15:34:55.196685 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0\": container with ID starting with b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0 not found: ID does not exist" containerID="b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.196747 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0"} err="failed to get container status \"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0\": rpc error: code = NotFound desc = could not find container \"b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0\": container with ID starting with b4cdff1287d581f472454a50159599aad4cb81532e5135b7a4f1aec7eac533a0 not found: ID does not exist" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.196785 4869 scope.go:117] "RemoveContainer" containerID="1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b" Feb 02 15:34:55 crc kubenswrapper[4869]: E0202 15:34:55.197238 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b\": container with ID starting with 1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b not found: ID does not exist" containerID="1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.197439 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b"} err="failed to get container status \"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b\": rpc error: code = NotFound desc = could not find container \"1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b\": container with ID starting with 1b3a996a4a5f2edea4b52a06ae61578f1b13b6e7a813aa7d568c006dcb50f78b not found: ID does not exist" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.197559 4869 scope.go:117] "RemoveContainer" containerID="ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9" Feb 02 15:34:55 crc kubenswrapper[4869]: E0202 15:34:55.198076 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9\": container with ID starting with ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9 not found: ID does not exist" containerID="ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.198113 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9"} err="failed to get container status \"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9\": rpc error: code = NotFound desc = could not find container \"ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9\": container with ID starting with ef3d0c86a8489a97ab7e9e3db4375489cca605ee5aa5882b2b3fd4f8190096a9 not found: ID does not exist" Feb 02 15:34:55 crc kubenswrapper[4869]: I0202 15:34:55.475213 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb04306a-2210-4490-b163-3d8914b6478a" path="/var/lib/kubelet/pods/bb04306a-2210-4490-b163-3d8914b6478a/volumes" Feb 02 15:35:03 crc kubenswrapper[4869]: I0202 15:35:03.462705 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:03 crc kubenswrapper[4869]: E0202 15:35:03.463384 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:35:15 crc kubenswrapper[4869]: I0202 15:35:15.464028 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:15 crc kubenswrapper[4869]: E0202 15:35:15.464671 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:35:28 crc kubenswrapper[4869]: I0202 15:35:28.463030 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:28 crc kubenswrapper[4869]: E0202 15:35:28.463848 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:35:31 crc kubenswrapper[4869]: I0202 15:35:31.572286 4869 scope.go:117] "RemoveContainer" containerID="5948d840f279d95c368e5ad5e8fcf13a024cb24a66d211ff6dee2d8bb1e46f72" Feb 02 15:35:43 crc kubenswrapper[4869]: I0202 15:35:43.462774 4869 scope.go:117] "RemoveContainer" 
containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:43 crc kubenswrapper[4869]: E0202 15:35:43.463811 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:35:56 crc kubenswrapper[4869]: I0202 15:35:56.462362 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:35:56 crc kubenswrapper[4869]: E0202 15:35:56.463323 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:36:08 crc kubenswrapper[4869]: I0202 15:36:08.462700 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:36:08 crc kubenswrapper[4869]: E0202 15:36:08.463989 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:36:21 crc kubenswrapper[4869]: I0202 15:36:21.462701 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:36:21 crc kubenswrapper[4869]: E0202 15:36:21.463538 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:36:36 crc kubenswrapper[4869]: I0202 15:36:36.462931 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:36:36 crc kubenswrapper[4869]: E0202 15:36:36.463756 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:36:50 crc kubenswrapper[4869]: I0202 15:36:50.462759 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:36:50 crc kubenswrapper[4869]: E0202 15:36:50.463783 4869 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:03 crc kubenswrapper[4869]: I0202 15:37:03.463388 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:03 crc kubenswrapper[4869]: E0202 15:37:03.464333 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:14 crc kubenswrapper[4869]: I0202 15:37:14.462735 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:14 crc kubenswrapper[4869]: E0202 15:37:14.463551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:25 crc kubenswrapper[4869]: I0202 15:37:25.463858 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:25 crc kubenswrapper[4869]: E0202 15:37:25.464781 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:38 crc kubenswrapper[4869]: I0202 15:37:38.463779 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:38 crc kubenswrapper[4869]: E0202 15:37:38.469969 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:37:53 crc kubenswrapper[4869]: I0202 15:37:53.462725 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:37:53 crc kubenswrapper[4869]: E0202 15:37:53.463517 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:38:08 crc kubenswrapper[4869]: I0202 15:38:08.462665 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:38:08 crc kubenswrapper[4869]: E0202 15:38:08.463479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:38:23 crc kubenswrapper[4869]: I0202 15:38:23.462902 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:38:23 crc kubenswrapper[4869]: E0202 15:38:23.464013 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:38:34 crc kubenswrapper[4869]: I0202 15:38:34.463313 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:38:34 crc kubenswrapper[4869]: E0202 15:38:34.464265 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:38:45 crc kubenswrapper[4869]: I0202 15:38:45.462585 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:38:46 crc kubenswrapper[4869]: I0202 15:38:46.077550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2"} Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.414996 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:38:54 crc kubenswrapper[4869]: E0202 15:38:54.416141 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="extract-content" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.416162 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="extract-content" Feb 02 15:38:54 crc kubenswrapper[4869]: E0202 15:38:54.416175 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="registry-server" Feb 02 15:38:54 crc 
kubenswrapper[4869]: I0202 15:38:54.416183 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="registry-server" Feb 02 15:38:54 crc kubenswrapper[4869]: E0202 15:38:54.416198 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="extract-utilities" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.416205 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="extract-utilities" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.416467 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb04306a-2210-4490-b163-3d8914b6478a" containerName="registry-server" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.418154 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.423443 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.473713 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.473958 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.474126 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.575859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.575930 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.576061 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 
crc kubenswrapper[4869]: I0202 15:38:54.577037 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.577094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.603979 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") pod \"redhat-operators-zkwj6\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:54 crc kubenswrapper[4869]: I0202 15:38:54.739815 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:38:55 crc kubenswrapper[4869]: I0202 15:38:55.242866 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:38:56 crc kubenswrapper[4869]: I0202 15:38:56.158342 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerID="3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66" exitCode=0 Feb 02 15:38:56 crc kubenswrapper[4869]: I0202 15:38:56.158823 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerDied","Data":"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66"} Feb 02 15:38:56 crc kubenswrapper[4869]: I0202 15:38:56.158851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerStarted","Data":"e24dbfec315c720223529ae8c9eb96fbd2221b4a094f19943a5217cff897c3dc"} Feb 02 15:38:58 crc kubenswrapper[4869]: I0202 15:38:58.176170 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerStarted","Data":"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38"} Feb 02 15:39:03 crc kubenswrapper[4869]: I0202 15:39:03.219649 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerID="ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38" exitCode=0 Feb 02 15:39:03 crc kubenswrapper[4869]: I0202 15:39:03.219684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerDied","Data":"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38"} Feb 02 15:39:04 crc kubenswrapper[4869]: I0202 15:39:04.230833 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" 
event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerStarted","Data":"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d"} Feb 02 15:39:04 crc kubenswrapper[4869]: I0202 15:39:04.259451 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zkwj6" podStartSLOduration=2.770356494 podStartE2EDuration="10.259432394s" podCreationTimestamp="2026-02-02 15:38:54 +0000 UTC" firstStartedPulling="2026-02-02 15:38:56.160785397 +0000 UTC m=+3937.805422167" lastFinishedPulling="2026-02-02 15:39:03.649861297 +0000 UTC m=+3945.294498067" observedRunningTime="2026-02-02 15:39:04.25150286 +0000 UTC m=+3945.896139670" watchObservedRunningTime="2026-02-02 15:39:04.259432394 +0000 UTC m=+3945.904069164" Feb 02 15:39:04 crc kubenswrapper[4869]: I0202 15:39:04.741386 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:04 crc kubenswrapper[4869]: I0202 15:39:04.741747 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:05 crc kubenswrapper[4869]: I0202 15:39:05.790495 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkwj6" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" probeResult="failure" output=< Feb 02 15:39:05 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 15:39:05 crc kubenswrapper[4869]: > Feb 02 15:39:14 crc kubenswrapper[4869]: I0202 15:39:14.818918 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:14 crc kubenswrapper[4869]: I0202 15:39:14.870289 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:15 crc kubenswrapper[4869]: I0202 15:39:15.055706 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:39:16 crc kubenswrapper[4869]: I0202 15:39:16.338442 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zkwj6" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" containerID="cri-o://51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" gracePeriod=2 Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.079729 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.153436 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") pod \"b5b211ab-34d9-4892-9db6-55cd96a21407\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.153579 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") pod \"b5b211ab-34d9-4892-9db6-55cd96a21407\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.153703 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") pod \"b5b211ab-34d9-4892-9db6-55cd96a21407\" (UID: \"b5b211ab-34d9-4892-9db6-55cd96a21407\") " Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.154540 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities" (OuterVolumeSpecName: "utilities") pod "b5b211ab-34d9-4892-9db6-55cd96a21407" (UID: "b5b211ab-34d9-4892-9db6-55cd96a21407"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.166064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr" (OuterVolumeSpecName: "kube-api-access-nc2gr") pod "b5b211ab-34d9-4892-9db6-55cd96a21407" (UID: "b5b211ab-34d9-4892-9db6-55cd96a21407"). InnerVolumeSpecName "kube-api-access-nc2gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.255933 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc2gr\" (UniqueName: \"kubernetes.io/projected/b5b211ab-34d9-4892-9db6-55cd96a21407-kube-api-access-nc2gr\") on node \"crc\" DevicePath \"\"" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.255973 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.291367 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5b211ab-34d9-4892-9db6-55cd96a21407" (UID: "b5b211ab-34d9-4892-9db6-55cd96a21407"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348753 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerID="51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" exitCode=0 Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348808 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerDied","Data":"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d"} Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348827 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkwj6" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348847 4869 scope.go:117] "RemoveContainer" containerID="51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.348836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkwj6" event={"ID":"b5b211ab-34d9-4892-9db6-55cd96a21407","Type":"ContainerDied","Data":"e24dbfec315c720223529ae8c9eb96fbd2221b4a094f19943a5217cff897c3dc"} Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.357869 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b211ab-34d9-4892-9db6-55cd96a21407-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.386703 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.392809 4869 scope.go:117] "RemoveContainer" containerID="ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.396843 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zkwj6"] Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.424440 4869 scope.go:117] "RemoveContainer" containerID="3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.484291 4869 scope.go:117] "RemoveContainer" containerID="51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" Feb 02 15:39:17 crc kubenswrapper[4869]: E0202 15:39:17.485225 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d\": container with ID starting with 51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d not found: ID does not exist" containerID="51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.485274 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d"} err="failed to get container status \"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d\": rpc error: code = NotFound desc = could not find container \"51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d\": container with ID starting with 51fe52f51584a9feb01af7ebac1078033f557cdb35e6a60f70ad7679c797982d not found: ID does not exist" Feb 02 15:39:17 crc 
kubenswrapper[4869]: I0202 15:39:17.485329 4869 scope.go:117] "RemoveContainer" containerID="ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38" Feb 02 15:39:17 crc kubenswrapper[4869]: E0202 15:39:17.485694 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38\": container with ID starting with ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38 not found: ID does not exist" containerID="ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.485730 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38"} err="failed to get container status \"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38\": rpc error: code = NotFound desc = could not find container \"ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38\": container with ID starting with ed098c800387845bef859e2a3c8a002ce3872f79ad644e5926cd7b6c26928b38 not found: ID does not exist" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.485761 4869 scope.go:117] "RemoveContainer" containerID="3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66" Feb 02 15:39:17 crc kubenswrapper[4869]: E0202 15:39:17.486140 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66\": container with ID starting with 3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66 not found: ID does not exist" containerID="3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.486166 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66"} err="failed to get container status \"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66\": rpc error: code = NotFound desc = could not find container \"3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66\": container with ID starting with 3cee8b82fa0ee0ab998b3a30e148ec6fb53160a60374857857633977e48b7f66 not found: ID does not exist" Feb 02 15:39:17 crc kubenswrapper[4869]: I0202 15:39:17.488053 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" path="/var/lib/kubelet/pods/b5b211ab-34d9-4892-9db6-55cd96a21407/volumes" Feb 02 15:40:45 crc kubenswrapper[4869]: I0202 15:40:45.304696 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:40:45 crc kubenswrapper[4869]: I0202 15:40:45.305183 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:41:15 crc kubenswrapper[4869]: I0202 15:41:15.304522 4869 patch_prober.go:28] interesting 
pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:41:15 crc kubenswrapper[4869]: I0202 15:41:15.305119 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.304787 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.305370 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.305414 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.306140 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:41:45 crc kubenswrapper[4869]: I0202 15:41:45.306187 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2" gracePeriod=600 Feb 02 15:41:46 crc kubenswrapper[4869]: I0202 15:41:46.230035 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2" exitCode=0 Feb 02 15:41:46 crc kubenswrapper[4869]: I0202 15:41:46.230527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2"} Feb 02 15:41:46 crc kubenswrapper[4869]: I0202 15:41:46.230557 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"} Feb 02 15:41:46 crc kubenswrapper[4869]: I0202 15:41:46.230573 4869 scope.go:117] "RemoveContainer" containerID="375f130717f06bba0303cc122474f5b4164abb3d07dabdced18a0d36dce77580" Feb 02 15:43:45 
crc kubenswrapper[4869]: I0202 15:43:45.304653 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:43:45 crc kubenswrapper[4869]: I0202 15:43:45.306107 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:44:15 crc kubenswrapper[4869]: I0202 15:44:15.304059 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:44:15 crc kubenswrapper[4869]: I0202 15:44:15.304609 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.238079 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:44:45 crc kubenswrapper[4869]: E0202 15:44:45.239192 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="extract-content" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.239211 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="extract-content" Feb 02 15:44:45 crc kubenswrapper[4869]: E0202 15:44:45.239231 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.239238 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" Feb 02 15:44:45 crc kubenswrapper[4869]: E0202 15:44:45.239247 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="extract-utilities" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.239259 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="extract-utilities" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.239550 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b211ab-34d9-4892-9db6-55cd96a21407" containerName="registry-server" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.240868 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.267493 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.290944 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.291002 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.291050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.304401 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.304454 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.304498 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.305235 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.305301 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" gracePeriod=600 Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.392444 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") pod 
\"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.392493 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.392516 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.393697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.394104 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: E0202 15:44:45.676954 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.677530 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") pod \"redhat-marketplace-jdwgt\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.777140 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" exitCode=0 Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.777204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"} Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.777476 4869 scope.go:117] "RemoveContainer" containerID="7ebf6e72dc15d85e8d8fad016b9ee6c110aff020d8c5985297f93d81921148c2" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.778223 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:44:45 crc 
kubenswrapper[4869]: E0202 15:44:45.778522 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:44:45 crc kubenswrapper[4869]: I0202 15:44:45.862969 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.406802 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.792053 4869 generic.go:334] "Generic (PLEG): container finished" podID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerID="971cc8e1afaafc554bca06e5fb085210161555600145c8cb154b8f6945d40b46" exitCode=0 Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.792148 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerDied","Data":"971cc8e1afaafc554bca06e5fb085210161555600145c8cb154b8f6945d40b46"} Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.792466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerStarted","Data":"104f15ada5cdd6ac325cc93af5fc5d927ee4037a73902017cfebc94d03582b0c"} Feb 02 15:44:46 crc kubenswrapper[4869]: I0202 15:44:46.794850 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.439247 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.442003 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.453946 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.533286 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.533923 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.534152 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.636836 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.637232 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.639414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.637959 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.640923 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.641769 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.642430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.649326 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.668381 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") pod \"certified-operators-vfzjr\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.746903 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.747021 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.747311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.765334 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.808582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerStarted","Data":"7edf29f67d8af5efb924ee99bc0c2e5f8d50256221aa538ac1bf2716b1104814"} Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.848752 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.848870 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.848898 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.849292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.849561 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.881306 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") pod \"community-operators-8f782\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:47 crc kubenswrapper[4869]: I0202 15:44:47.973520 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.375183 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:44:48 crc kubenswrapper[4869]: W0202 15:44:48.388968 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb731e8d9_da5b_464a_9ef0_7cf6311056d4.slice/crio-3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf WatchSource:0}: Error finding container 3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf: Status 404 returned error can't find the container with id 3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.589875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:44:48 crc kubenswrapper[4869]: W0202 15:44:48.594039 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc34a43bb_26f9_41bb_8d40_7cd30e71525d.slice/crio-589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912 WatchSource:0}: Error finding container 589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912: Status 404 returned error can't find the container with id 589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912 Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.833692 4869 generic.go:334] "Generic (PLEG): container finished" podID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerID="7edf29f67d8af5efb924ee99bc0c2e5f8d50256221aa538ac1bf2716b1104814" exitCode=0 Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.834065 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerDied","Data":"7edf29f67d8af5efb924ee99bc0c2e5f8d50256221aa538ac1bf2716b1104814"} Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.839071 4869 generic.go:334] "Generic (PLEG): container finished" podID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerID="f84224d4a0640bc9c4cadf8e36472e8fe09028de333f0ae6e883f54ed753862a" exitCode=0 Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.839191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerDied","Data":"f84224d4a0640bc9c4cadf8e36472e8fe09028de333f0ae6e883f54ed753862a"} Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.839250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerStarted","Data":"3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf"} Feb 02 15:44:48 crc kubenswrapper[4869]: I0202 15:44:48.874538 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerStarted","Data":"589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912"} Feb 02 15:44:49 crc kubenswrapper[4869]: I0202 15:44:49.884898 4869 generic.go:334] "Generic (PLEG): container finished" podID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerID="310c6f14696587aa249ead65052fe71a80bf5c91456e89be6fbb2af185a52ea5" 
exitCode=0 Feb 02 15:44:49 crc kubenswrapper[4869]: I0202 15:44:49.884950 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerDied","Data":"310c6f14696587aa249ead65052fe71a80bf5c91456e89be6fbb2af185a52ea5"} Feb 02 15:44:49 crc kubenswrapper[4869]: I0202 15:44:49.890244 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerStarted","Data":"c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a"} Feb 02 15:44:49 crc kubenswrapper[4869]: I0202 15:44:49.930837 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jdwgt" podStartSLOduration=2.3993174809999998 podStartE2EDuration="4.930814027s" podCreationTimestamp="2026-02-02 15:44:45 +0000 UTC" firstStartedPulling="2026-02-02 15:44:46.794641278 +0000 UTC m=+4288.439278048" lastFinishedPulling="2026-02-02 15:44:49.326137824 +0000 UTC m=+4290.970774594" observedRunningTime="2026-02-02 15:44:49.927010556 +0000 UTC m=+4291.571647336" watchObservedRunningTime="2026-02-02 15:44:49.930814027 +0000 UTC m=+4291.575450807" Feb 02 15:44:50 crc kubenswrapper[4869]: I0202 15:44:50.900605 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerStarted","Data":"d08c0a3edf0b9801695f5fdc1813d48952ba19117d6cdc212c92d4312afca0dd"} Feb 02 15:44:50 crc kubenswrapper[4869]: I0202 15:44:50.903853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerStarted","Data":"493fbbd96bfaccf4949c8b7a44ce71d232914c8e951d07b67328cad53f9ffdaf"} Feb 02 15:44:52 crc kubenswrapper[4869]: I0202 15:44:52.922225 4869 generic.go:334] "Generic (PLEG): container finished" podID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerID="493fbbd96bfaccf4949c8b7a44ce71d232914c8e951d07b67328cad53f9ffdaf" exitCode=0 Feb 02 15:44:52 crc kubenswrapper[4869]: I0202 15:44:52.922298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerDied","Data":"493fbbd96bfaccf4949c8b7a44ce71d232914c8e951d07b67328cad53f9ffdaf"} Feb 02 15:44:52 crc kubenswrapper[4869]: I0202 15:44:52.927227 4869 generic.go:334] "Generic (PLEG): container finished" podID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerID="d08c0a3edf0b9801695f5fdc1813d48952ba19117d6cdc212c92d4312afca0dd" exitCode=0 Feb 02 15:44:52 crc kubenswrapper[4869]: I0202 15:44:52.927281 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerDied","Data":"d08c0a3edf0b9801695f5fdc1813d48952ba19117d6cdc212c92d4312afca0dd"} Feb 02 15:44:53 crc kubenswrapper[4869]: I0202 15:44:53.944379 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerStarted","Data":"a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a"} Feb 02 15:44:53 crc kubenswrapper[4869]: I0202 15:44:53.950725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerStarted","Data":"d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383"} Feb 02 15:44:53 crc kubenswrapper[4869]: I0202 15:44:53.971615 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8f782" podStartSLOduration=3.272367858 podStartE2EDuration="6.971593756s" podCreationTimestamp="2026-02-02 15:44:47 +0000 UTC" firstStartedPulling="2026-02-02 15:44:49.886489346 +0000 UTC m=+4291.531126116" lastFinishedPulling="2026-02-02 15:44:53.585715244 +0000 UTC m=+4295.230352014" observedRunningTime="2026-02-02 15:44:53.969421014 +0000 UTC m=+4295.614057804" watchObservedRunningTime="2026-02-02 15:44:53.971593756 +0000 UTC m=+4295.616230516" Feb 02 15:44:54 crc kubenswrapper[4869]: I0202 15:44:54.007483 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vfzjr" podStartSLOduration=2.327748343 podStartE2EDuration="7.007453814s" podCreationTimestamp="2026-02-02 15:44:47 +0000 UTC" firstStartedPulling="2026-02-02 15:44:48.854928867 +0000 UTC m=+4290.499565637" lastFinishedPulling="2026-02-02 15:44:53.534634338 +0000 UTC m=+4295.179271108" observedRunningTime="2026-02-02 15:44:53.98992863 +0000 UTC m=+4295.634565400" watchObservedRunningTime="2026-02-02 15:44:54.007453814 +0000 UTC m=+4295.652090584" Feb 02 15:44:55 crc kubenswrapper[4869]: I0202 15:44:55.863333 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:55 crc kubenswrapper[4869]: I0202 15:44:55.863383 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:55 crc kubenswrapper[4869]: I0202 15:44:55.918787 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:56 crc kubenswrapper[4869]: I0202 15:44:56.034313 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.766399 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.767282 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.820514 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.974929 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:57 crc kubenswrapper[4869]: I0202 15:44:57.974979 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:58 crc kubenswrapper[4869]: I0202 15:44:58.032094 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:58 crc kubenswrapper[4869]: I0202 15:44:58.058075 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:44:58 crc kubenswrapper[4869]: I0202 15:44:58.080100 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:44:58 crc kubenswrapper[4869]: I0202 15:44:58.463008 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:44:58 crc kubenswrapper[4869]: E0202 15:44:58.463426 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:44:59 crc kubenswrapper[4869]: I0202 15:44:59.634617 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:44:59 crc kubenswrapper[4869]: I0202 15:44:59.635220 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jdwgt" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="registry-server" containerID="cri-o://c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a" gracePeriod=2 Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.026791 4869 generic.go:334] "Generic (PLEG): container finished" podID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerID="c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a" exitCode=0 Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.028083 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerDied","Data":"c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a"} Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.197345 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v"] Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.199969 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.202141 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.222065 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.222139 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.222249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.222397 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v"] Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.229205 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.251661 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.327346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.327441 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.327556 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.328853 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.336545 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.354881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.354926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") pod \"collect-profiles-29500785-j4c7v\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.430720 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.436820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") pod \"82ffd26c-f9c6-464b-bd85-24daabb4a361\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.442350 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd" (OuterVolumeSpecName: "kube-api-access-ghqvd") pod "82ffd26c-f9c6-464b-bd85-24daabb4a361" (UID: "82ffd26c-f9c6-464b-bd85-24daabb4a361"). InnerVolumeSpecName "kube-api-access-ghqvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.540440 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") pod \"82ffd26c-f9c6-464b-bd85-24daabb4a361\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.540552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") pod \"82ffd26c-f9c6-464b-bd85-24daabb4a361\" (UID: \"82ffd26c-f9c6-464b-bd85-24daabb4a361\") " Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.541235 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghqvd\" (UniqueName: \"kubernetes.io/projected/82ffd26c-f9c6-464b-bd85-24daabb4a361-kube-api-access-ghqvd\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.542341 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities" (OuterVolumeSpecName: "utilities") pod "82ffd26c-f9c6-464b-bd85-24daabb4a361" (UID: "82ffd26c-f9c6-464b-bd85-24daabb4a361"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.570285 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82ffd26c-f9c6-464b-bd85-24daabb4a361" (UID: "82ffd26c-f9c6-464b-bd85-24daabb4a361"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.644013 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.644059 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ffd26c-f9c6-464b-bd85-24daabb4a361-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:00 crc kubenswrapper[4869]: I0202 15:45:00.896382 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v"] Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.038141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" event={"ID":"0000345e-eabc-4888-acdb-00c809746e96","Type":"ContainerStarted","Data":"fe6079af63eb74c307e9b9ef6c867c7fbe4f9baf9bae3717acf0882ffd36e3bd"} Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.040343 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdwgt" event={"ID":"82ffd26c-f9c6-464b-bd85-24daabb4a361","Type":"ContainerDied","Data":"104f15ada5cdd6ac325cc93af5fc5d927ee4037a73902017cfebc94d03582b0c"} Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.040387 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdwgt" Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.040422 4869 scope.go:117] "RemoveContainer" containerID="c84e91320504832ac7bea6cb75bc644159c7b5cae320a517673bc1a26152bd7a" Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.040541 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vfzjr" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="registry-server" containerID="cri-o://d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383" gracePeriod=2 Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.073204 4869 scope.go:117] "RemoveContainer" containerID="7edf29f67d8af5efb924ee99bc0c2e5f8d50256221aa538ac1bf2716b1104814" Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.092725 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.102200 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdwgt"] Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.474328 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" path="/var/lib/kubelet/pods/82ffd26c-f9c6-464b-bd85-24daabb4a361/volumes" Feb 02 15:45:01 crc kubenswrapper[4869]: I0202 15:45:01.492264 4869 scope.go:117] "RemoveContainer" containerID="971cc8e1afaafc554bca06e5fb085210161555600145c8cb154b8f6945d40b46" Feb 02 15:45:02 crc kubenswrapper[4869]: I0202 15:45:02.025579 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:45:02 crc kubenswrapper[4869]: I0202 15:45:02.025815 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8f782" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="registry-server" containerID="cri-o://a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a" gracePeriod=2 Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.068214 4869 generic.go:334] "Generic (PLEG): container finished" podID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerID="a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a" exitCode=0 Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.068424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerDied","Data":"a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a"} Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.070819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" event={"ID":"0000345e-eabc-4888-acdb-00c809746e96","Type":"ContainerStarted","Data":"3a4cc8364b5164f25f0a96a2c5e5007ac3dbe97a7db78fdaa9fad0c2ebcc3ea0"} Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.076392 4869 generic.go:334] "Generic (PLEG): container finished" podID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerID="d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383" exitCode=0 Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.076620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" 
event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerDied","Data":"d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383"} Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.093068 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" podStartSLOduration=3.093041155 podStartE2EDuration="3.093041155s" podCreationTimestamp="2026-02-02 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 15:45:03.087266195 +0000 UTC m=+4304.731902975" watchObservedRunningTime="2026-02-02 15:45:03.093041155 +0000 UTC m=+4304.737677925" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.384662 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.393084 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504096 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") pod \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504417 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") pod \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") pod \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504693 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") pod \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504823 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") pod \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\" (UID: \"c34a43bb-26f9-41bb-8d40-7cd30e71525d\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.504885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") pod \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\" (UID: \"b731e8d9-da5b-464a-9ef0-7cf6311056d4\") " Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.506131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities" (OuterVolumeSpecName: "utilities") pod 
"c34a43bb-26f9-41bb-8d40-7cd30e71525d" (UID: "c34a43bb-26f9-41bb-8d40-7cd30e71525d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.506189 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities" (OuterVolumeSpecName: "utilities") pod "b731e8d9-da5b-464a-9ef0-7cf6311056d4" (UID: "b731e8d9-da5b-464a-9ef0-7cf6311056d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.506852 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.506887 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.511834 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx" (OuterVolumeSpecName: "kube-api-access-lwblx") pod "b731e8d9-da5b-464a-9ef0-7cf6311056d4" (UID: "b731e8d9-da5b-464a-9ef0-7cf6311056d4"). InnerVolumeSpecName "kube-api-access-lwblx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.523479 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67" (OuterVolumeSpecName: "kube-api-access-r6x67") pod "c34a43bb-26f9-41bb-8d40-7cd30e71525d" (UID: "c34a43bb-26f9-41bb-8d40-7cd30e71525d"). InnerVolumeSpecName "kube-api-access-r6x67". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.563343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c34a43bb-26f9-41bb-8d40-7cd30e71525d" (UID: "c34a43bb-26f9-41bb-8d40-7cd30e71525d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.567325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b731e8d9-da5b-464a-9ef0-7cf6311056d4" (UID: "b731e8d9-da5b-464a-9ef0-7cf6311056d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.608997 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwblx\" (UniqueName: \"kubernetes.io/projected/b731e8d9-da5b-464a-9ef0-7cf6311056d4-kube-api-access-lwblx\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.609030 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34a43bb-26f9-41bb-8d40-7cd30e71525d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.609039 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b731e8d9-da5b-464a-9ef0-7cf6311056d4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:03 crc kubenswrapper[4869]: I0202 15:45:03.609048 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6x67\" (UniqueName: \"kubernetes.io/projected/c34a43bb-26f9-41bb-8d40-7cd30e71525d-kube-api-access-r6x67\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.088511 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfzjr" event={"ID":"b731e8d9-da5b-464a-9ef0-7cf6311056d4","Type":"ContainerDied","Data":"3549f78a1917972fee820a10062fb2f6ee89a3e5ecb5558de8ccd326dc989fbf"} Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.088567 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfzjr" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.088860 4869 scope.go:117] "RemoveContainer" containerID="d65f83d30f68ce00caa7e34ad6aec911f8916be97c6d7367223ad32eae159383" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.093789 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8f782" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.094142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8f782" event={"ID":"c34a43bb-26f9-41bb-8d40-7cd30e71525d","Type":"ContainerDied","Data":"589c689f2cd0c738d2e2b7f074f4cc6fff2e384d7f5358b250e60f4656727912"} Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.100634 4869 generic.go:334] "Generic (PLEG): container finished" podID="0000345e-eabc-4888-acdb-00c809746e96" containerID="3a4cc8364b5164f25f0a96a2c5e5007ac3dbe97a7db78fdaa9fad0c2ebcc3ea0" exitCode=0 Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.100691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" event={"ID":"0000345e-eabc-4888-acdb-00c809746e96","Type":"ContainerDied","Data":"3a4cc8364b5164f25f0a96a2c5e5007ac3dbe97a7db78fdaa9fad0c2ebcc3ea0"} Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.117411 4869 scope.go:117] "RemoveContainer" containerID="d08c0a3edf0b9801695f5fdc1813d48952ba19117d6cdc212c92d4312afca0dd" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.157242 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.158731 4869 scope.go:117] "RemoveContainer" containerID="f84224d4a0640bc9c4cadf8e36472e8fe09028de333f0ae6e883f54ed753862a" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.166424 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vfzjr"] Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.176598 4869 scope.go:117] "RemoveContainer" containerID="a68bf376f8e7f7d6b75cd627c98af48bb4788ebc8bc16727b742895c07295f5a" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.178919 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.191139 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8f782"] Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.241142 4869 scope.go:117] "RemoveContainer" containerID="493fbbd96bfaccf4949c8b7a44ce71d232914c8e951d07b67328cad53f9ffdaf" Feb 02 15:45:04 crc kubenswrapper[4869]: I0202 15:45:04.277335 4869 scope.go:117] "RemoveContainer" containerID="310c6f14696587aa249ead65052fe71a80bf5c91456e89be6fbb2af185a52ea5" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.522141 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" path="/var/lib/kubelet/pods/b731e8d9-da5b-464a-9ef0-7cf6311056d4/volumes" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.523905 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" path="/var/lib/kubelet/pods/c34a43bb-26f9-41bb-8d40-7cd30e71525d/volumes" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.682880 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.861896 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") pod \"0000345e-eabc-4888-acdb-00c809746e96\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.862102 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") pod \"0000345e-eabc-4888-acdb-00c809746e96\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.862257 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") pod \"0000345e-eabc-4888-acdb-00c809746e96\" (UID: \"0000345e-eabc-4888-acdb-00c809746e96\") " Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.863478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume" (OuterVolumeSpecName: "config-volume") pod "0000345e-eabc-4888-acdb-00c809746e96" (UID: "0000345e-eabc-4888-acdb-00c809746e96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.867355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt" (OuterVolumeSpecName: "kube-api-access-gqspt") pod "0000345e-eabc-4888-acdb-00c809746e96" (UID: "0000345e-eabc-4888-acdb-00c809746e96"). InnerVolumeSpecName "kube-api-access-gqspt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.867828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0000345e-eabc-4888-acdb-00c809746e96" (UID: "0000345e-eabc-4888-acdb-00c809746e96"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.964996 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqspt\" (UniqueName: \"kubernetes.io/projected/0000345e-eabc-4888-acdb-00c809746e96-kube-api-access-gqspt\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.965034 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0000345e-eabc-4888-acdb-00c809746e96-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:05 crc kubenswrapper[4869]: I0202 15:45:05.965046 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0000345e-eabc-4888-acdb-00c809746e96-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.123438 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" event={"ID":"0000345e-eabc-4888-acdb-00c809746e96","Type":"ContainerDied","Data":"fe6079af63eb74c307e9b9ef6c867c7fbe4f9baf9bae3717acf0882ffd36e3bd"} Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.123536 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6079af63eb74c307e9b9ef6c867c7fbe4f9baf9bae3717acf0882ffd36e3bd" Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.123554 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500785-j4c7v" Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.165457 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:45:06 crc kubenswrapper[4869]: I0202 15:45:06.174010 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500740-nx2b6"] Feb 02 15:45:07 crc kubenswrapper[4869]: I0202 15:45:07.478508 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7b8e70-b003-44d3-92f8-f3537d98f42f" path="/var/lib/kubelet/pods/2f7b8e70-b003-44d3-92f8-f3537d98f42f/volumes" Feb 02 15:45:10 crc kubenswrapper[4869]: I0202 15:45:10.463126 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:45:10 crc kubenswrapper[4869]: E0202 15:45:10.464061 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:45:25 crc kubenswrapper[4869]: I0202 15:45:25.463001 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:45:25 crc kubenswrapper[4869]: E0202 15:45:25.464097 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:45:31 crc kubenswrapper[4869]: I0202 15:45:31.817344 4869 scope.go:117] "RemoveContainer" containerID="59bc9e2bf2a33d0613a4b3662bade576d4b886a4ed9586484e6fdba35d1e7e34" Feb 02 15:45:36 crc kubenswrapper[4869]: I0202 15:45:36.463348 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:45:36 crc kubenswrapper[4869]: E0202 15:45:36.464387 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:45:48 crc kubenswrapper[4869]: I0202 15:45:48.462032 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:45:48 crc kubenswrapper[4869]: E0202 15:45:48.462831 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:01 crc kubenswrapper[4869]: I0202 15:46:01.462751 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:01 crc kubenswrapper[4869]: E0202 15:46:01.466437 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:16 crc kubenswrapper[4869]: I0202 15:46:16.463147 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:16 crc kubenswrapper[4869]: E0202 15:46:16.463896 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:28 crc kubenswrapper[4869]: I0202 15:46:28.462819 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:28 crc kubenswrapper[4869]: E0202 15:46:28.463866 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:41 crc kubenswrapper[4869]: I0202 15:46:41.462474 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:41 crc kubenswrapper[4869]: E0202 15:46:41.463272 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:46:53 crc kubenswrapper[4869]: I0202 15:46:53.462519 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:46:53 crc kubenswrapper[4869]: E0202 15:46:53.463364 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:05 crc kubenswrapper[4869]: I0202 15:47:05.462568 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:05 crc kubenswrapper[4869]: E0202 15:47:05.463346 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:18 crc kubenswrapper[4869]: I0202 15:47:18.463432 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:18 crc kubenswrapper[4869]: E0202 15:47:18.464373 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:29 crc kubenswrapper[4869]: I0202 15:47:29.470239 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:29 crc kubenswrapper[4869]: E0202 15:47:29.471122 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:42 crc kubenswrapper[4869]: I0202 15:47:42.463881 4869 
scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:42 crc kubenswrapper[4869]: E0202 15:47:42.480497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:47:54 crc kubenswrapper[4869]: I0202 15:47:54.463134 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:47:54 crc kubenswrapper[4869]: E0202 15:47:54.464112 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:48:09 crc kubenswrapper[4869]: I0202 15:48:09.468591 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:48:09 crc kubenswrapper[4869]: E0202 15:48:09.469382 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:48:24 crc kubenswrapper[4869]: I0202 15:48:24.463114 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:48:24 crc kubenswrapper[4869]: E0202 15:48:24.465034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:48:39 crc kubenswrapper[4869]: I0202 15:48:39.469008 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:48:39 crc kubenswrapper[4869]: E0202 15:48:39.470027 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 15:48:54 crc kubenswrapper[4869]: I0202 15:48:54.463366 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:48:54 crc kubenswrapper[4869]: E0202 15:48:54.464291 4869 
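The run above shows the kubelet retrying the crashed machine-config-daemon container on an ever-slower cadence until the delay saturates at the "back-off 5m0s" cap it reports. A minimal sketch of that clamped exponential back-off, assuming an illustrative 10s initial delay (the doubling-to-a-cap shape is the point, not kubelet's exact constants):

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the restart delay on each failure and clamps it at
// the 5m0s cap the log reports. The initial value is an assumption.
func nextBackoff(cur time.Duration) time.Duration {
	const initial = 10 * time.Second
	const maxDelay = 5 * time.Minute
	if cur == 0 {
		return initial
	}
	if next := 2 * cur; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 1; i <= 7; i++ {
		d = nextBackoff(d)
		fmt.Printf("failed restart %d: next attempt in %v\n", i, d)
	}
}
```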
Feb 02 15:49:07 crc kubenswrapper[4869]: I0202 15:49:07.463139 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"
Feb 02 15:49:07 crc kubenswrapper[4869]: E0202 15:49:07.464150 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:49:22 crc kubenswrapper[4869]: I0202 15:49:22.463046 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"
Feb 02 15:49:22 crc kubenswrapper[4869]: E0202 15:49:22.465197 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:49:37 crc kubenswrapper[4869]: I0202 15:49:37.463194 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"
Feb 02 15:49:37 crc kubenswrapper[4869]: E0202 15:49:37.464016 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:49:50 crc kubenswrapper[4869]: I0202 15:49:50.463735 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290"
Feb 02 15:49:51 crc kubenswrapper[4869]: I0202 15:49:51.452002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4"}
Feb 02 15:52:15 crc kubenswrapper[4869]: I0202 15:52:15.303902 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:52:15 crc kubenswrapper[4869]: I0202 15:52:15.304547 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:52:45 crc kubenswrapper[4869]: I0202 15:52:45.304371 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:52:45 crc kubenswrapper[4869]: I0202 15:52:45.304955 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.304015 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.304592 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.304638 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.305432 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 15:53:15 crc kubenswrapper[4869]: I0202 15:53:15.305478 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4" gracePeriod=600
Feb 02 15:53:16 crc kubenswrapper[4869]: I0202 15:53:16.313697 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4" exitCode=0
Feb 02 15:53:16 crc kubenswrapper[4869]: I0202 15:53:16.313756 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4"}
Feb 02 15:53:16 crc kubenswrapper[4869]: I0202 15:53:16.314032 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"}
event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"} Feb 02 15:53:16 crc kubenswrapper[4869]: I0202 15:53:16.314055 4869 scope.go:117] "RemoveContainer" containerID="6b9eb85aa5e474641ed3edd5c5f50115ee7b87446d60c932dce6074a3c7a1290" Feb 02 15:55:15 crc kubenswrapper[4869]: I0202 15:55:15.303861 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:55:15 crc kubenswrapper[4869]: I0202 15:55:15.304427 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:55:45 crc kubenswrapper[4869]: I0202 15:55:45.304502 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 15:55:45 crc kubenswrapper[4869]: I0202 15:55:45.305165 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.414238 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415335 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415356 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415378 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415386 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415402 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415409 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415422 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415429 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415448 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415455 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415463 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0000345e-eabc-4888-acdb-00c809746e96" containerName="collect-profiles" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415470 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0000345e-eabc-4888-acdb-00c809746e96" containerName="collect-profiles" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415481 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415487 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415499 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415506 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="extract-utilities" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415521 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415530 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="extract-content" Feb 02 15:56:00 crc kubenswrapper[4869]: E0202 15:56:00.415547 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415555 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415797 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34a43bb-26f9-41bb-8d40-7cd30e71525d" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415811 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0000345e-eabc-4888-acdb-00c809746e96" containerName="collect-profiles" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415822 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b731e8d9-da5b-464a-9ef0-7cf6311056d4" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.415838 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ffd26c-f9c6-464b-bd85-24daabb4a361" containerName="registry-server" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.417456 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.436507 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.574854 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.575014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.575285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.676793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.676869 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.676953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.677398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.677440 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:00 crc kubenswrapper[4869]: I0202 15:56:00.872057 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") pod \"certified-operators-kzj25\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.040681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.519149 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.901143 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerID="6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608" exitCode=0 Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.901192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerDied","Data":"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608"} Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.901226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerStarted","Data":"1d1ed7f54b361397932f6778687359fc99d59b970ece5722346717305c71da45"} Feb 02 15:56:01 crc kubenswrapper[4869]: I0202 15:56:01.904049 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 15:56:02 crc kubenswrapper[4869]: I0202 15:56:02.913776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerStarted","Data":"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2"} Feb 02 15:56:03 crc kubenswrapper[4869]: E0202 15:56:03.088101 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f1a097c_7ace_42fd_9cff_7361112e8226.slice/crio-conmon-5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2.scope\": RecentStats: unable to find data in memory cache]" Feb 02 15:56:03 crc kubenswrapper[4869]: I0202 15:56:03.927393 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerID="5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2" exitCode=0 Feb 02 15:56:03 crc kubenswrapper[4869]: I0202 15:56:03.927448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerDied","Data":"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2"} Feb 02 15:56:04 crc kubenswrapper[4869]: I0202 15:56:04.938541 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerStarted","Data":"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8"} Feb 02 15:56:04 crc kubenswrapper[4869]: I0202 15:56:04.963604 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-kzj25" podStartSLOduration=2.394134332 podStartE2EDuration="4.963586091s" podCreationTimestamp="2026-02-02 15:56:00 +0000 UTC" firstStartedPulling="2026-02-02 15:56:01.903622131 +0000 UTC m=+4963.548258911" lastFinishedPulling="2026-02-02 15:56:04.47307391 +0000 UTC m=+4966.117710670" observedRunningTime="2026-02-02 15:56:04.959181375 +0000 UTC m=+4966.603818145" watchObservedRunningTime="2026-02-02 15:56:04.963586091 +0000 UTC m=+4966.608222851" Feb 02 15:56:11 crc kubenswrapper[4869]: I0202 15:56:11.042247 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:11 crc kubenswrapper[4869]: I0202 15:56:11.042759 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:11 crc kubenswrapper[4869]: I0202 15:56:11.111625 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:12 crc kubenswrapper[4869]: I0202 15:56:12.215688 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:12 crc kubenswrapper[4869]: I0202 15:56:12.270646 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.019025 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kzj25" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="registry-server" containerID="cri-o://aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" gracePeriod=2 Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.449711 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.500370 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") pod \"2f1a097c-7ace-42fd-9cff-7361112e8226\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.500474 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") pod \"2f1a097c-7ace-42fd-9cff-7361112e8226\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.500668 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") pod \"2f1a097c-7ace-42fd-9cff-7361112e8226\" (UID: \"2f1a097c-7ace-42fd-9cff-7361112e8226\") " Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.502550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities" (OuterVolumeSpecName: "utilities") pod "2f1a097c-7ace-42fd-9cff-7361112e8226" (UID: "2f1a097c-7ace-42fd-9cff-7361112e8226"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.509566 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n" (OuterVolumeSpecName: "kube-api-access-qc46n") pod "2f1a097c-7ace-42fd-9cff-7361112e8226" (UID: "2f1a097c-7ace-42fd-9cff-7361112e8226"). InnerVolumeSpecName "kube-api-access-qc46n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.554958 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f1a097c-7ace-42fd-9cff-7361112e8226" (UID: "2f1a097c-7ace-42fd-9cff-7361112e8226"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.603284 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.603337 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f1a097c-7ace-42fd-9cff-7361112e8226-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:14 crc kubenswrapper[4869]: I0202 15:56:14.603353 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc46n\" (UniqueName: \"kubernetes.io/projected/2f1a097c-7ace-42fd-9cff-7361112e8226-kube-api-access-qc46n\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030344 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerID="aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" exitCode=0 Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerDied","Data":"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8"} Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030421 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzj25" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030456 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzj25" event={"ID":"2f1a097c-7ace-42fd-9cff-7361112e8226","Type":"ContainerDied","Data":"1d1ed7f54b361397932f6778687359fc99d59b970ece5722346717305c71da45"} Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030475 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.030509 4869 scope.go:117] "RemoveContainer" containerID="aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.031330 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="registry-server" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.031354 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="registry-server" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.031365 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="extract-content" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.031372 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="extract-content" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.031428 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="extract-utilities" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.031437 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="extract-utilities" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.031703 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" containerName="registry-server" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.033365 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.074658 4869 scope.go:117] "RemoveContainer" containerID="5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.101962 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.112963 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.113128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.113173 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.126091 4869 scope.go:117] "RemoveContainer" containerID="6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.170839 4869 scope.go:117] "RemoveContainer" containerID="aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.182370 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8\": container with ID starting with aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8 not found: ID does not exist" containerID="aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.182424 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8"} err="failed to get container status \"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8\": rpc error: code = NotFound desc = could not find container \"aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8\": container with ID starting with aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8 not found: ID does not exist" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.182458 4869 scope.go:117] "RemoveContainer" containerID="5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.183207 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2\": container with ID 
starting with 5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2 not found: ID does not exist" containerID="5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.183261 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2"} err="failed to get container status \"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2\": rpc error: code = NotFound desc = could not find container \"5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2\": container with ID starting with 5c4887147f1bf368d61336d03f33b3f2bd80fa9cd55414b63e1f24d70d868fd2 not found: ID does not exist" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.183290 4869 scope.go:117] "RemoveContainer" containerID="6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608" Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.183665 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608\": container with ID starting with 6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608 not found: ID does not exist" containerID="6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.183688 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608"} err="failed to get container status \"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608\": rpc error: code = NotFound desc = could not find container \"6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608\": container with ID starting with 6291562d3ea4fff676f8da18f7da18566af9a7168981e7c01476bb6f6096b608 not found: ID does not exist" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.186196 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.197009 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kzj25"] Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.215239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.215517 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.215659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs" 
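The "ContainerStatus from runtime service failed ... NotFound" errors in this stretch are benign: the kubelet asks cri-o about containers that garbage collection has already removed, and treats NotFound as "already gone" so cleanup stays idempotent. A sketch of that pattern using gRPC status codes (the statusErr helper stands in for the CRI lookup and is an assumption; building it requires the google.golang.org/grpc module):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// statusErr fakes the runtime lookup failing the way the log shows.
func statusErr(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeContainer treats NotFound as success: the container is gone,
// which is the desired end state of the deletion.
func removeContainer(id string) error {
	if err := statusErr(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already removed, nothing to do\n", id[:12])
			return nil
		}
		return err
	}
	return nil
}

func main() {
	id := "aad9837c2b28155dd1705899deb532426463ad7bb6733475b64b11b1894764a8"
	if err := removeContainer(id); err != nil {
		panic(err)
	}
}
```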
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.215854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs"
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.216033 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs"
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.238096 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") pod \"community-operators-vpdrs\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " pod="openshift-marketplace/community-operators-vpdrs"
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.304791 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.304863 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.304945 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.305912 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.306054 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" gracePeriod=600
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.409359 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpdrs"
Feb 02 15:56:15 crc kubenswrapper[4869]: E0202 15:56:15.440283 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.478027 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f1a097c-7ace-42fd-9cff-7361112e8226" path="/var/lib/kubelet/pods/2f1a097c-7ace-42fd-9cff-7361112e8226/volumes"
Feb 02 15:56:15 crc kubenswrapper[4869]: I0202 15:56:15.936812 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"]
Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.041071 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerStarted","Data":"3a7c65907adb73b71465ec45c8d0a735be7267b5d9f38d33359388e78eaded22"}
Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.048181 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" exitCode=0
Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.048231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"}
Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.048266 4869 scope.go:117] "RemoveContainer" containerID="53adc6b193f6754229d3d341cd3fe05eec5ec29dc509615a63309d5df16787d4"
Feb 02 15:56:16 crc kubenswrapper[4869]: I0202 15:56:16.049454 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:56:16 crc kubenswrapper[4869]: E0202 15:56:16.049988 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:56:17 crc kubenswrapper[4869]: I0202 15:56:17.058229 4869 generic.go:334] "Generic (PLEG): container finished" podID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerID="9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db" exitCode=0
Feb 02 15:56:17 crc kubenswrapper[4869]: I0202 15:56:17.058267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerDied","Data":"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db"}
Feb 02 15:56:18 crc kubenswrapper[4869]: I0202 15:56:18.071592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerStarted","Data":"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2"}
Feb 02 15:56:19 crc kubenswrapper[4869]: I0202 15:56:19.081755 4869 generic.go:334] "Generic (PLEG): container finished" podID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerID="4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2" exitCode=0
Feb 02 15:56:19 crc kubenswrapper[4869]: I0202 15:56:19.081815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerDied","Data":"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2"}
Feb 02 15:56:20 crc kubenswrapper[4869]: I0202 15:56:20.091793 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerStarted","Data":"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf"}
Feb 02 15:56:20 crc kubenswrapper[4869]: I0202 15:56:20.123050 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpdrs" podStartSLOduration=2.667870964 podStartE2EDuration="5.123017452s" podCreationTimestamp="2026-02-02 15:56:15 +0000 UTC" firstStartedPulling="2026-02-02 15:56:17.059876467 +0000 UTC m=+4978.704513237" lastFinishedPulling="2026-02-02 15:56:19.515022955 +0000 UTC m=+4981.159659725" observedRunningTime="2026-02-02 15:56:20.116850214 +0000 UTC m=+4981.761486984" watchObservedRunningTime="2026-02-02 15:56:20.123017452 +0000 UTC m=+4981.767654222"
Feb 02 15:56:25 crc kubenswrapper[4869]: I0202 15:56:25.410241 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vpdrs"
Feb 02 15:56:25 crc kubenswrapper[4869]: I0202 15:56:25.410795 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vpdrs"
Feb 02 15:56:25 crc kubenswrapper[4869]: I0202 15:56:25.484799 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vpdrs"
Feb 02 15:56:26 crc kubenswrapper[4869]: I0202 15:56:26.218989 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vpdrs"
Feb 02 15:56:26 crc kubenswrapper[4869]: I0202 15:56:26.268336 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"]
Feb 02 15:56:27 crc kubenswrapper[4869]: I0202 15:56:27.463860 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:56:27 crc kubenswrapper[4869]: E0202 15:56:27.464508 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.179433 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vpdrs" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="registry-server" containerID="cri-o://7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" gracePeriod=2
containerName="registry-server" containerID="cri-o://7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" gracePeriod=2 Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.640029 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.792590 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") pod \"c818aa24-fa5f-4240-9b0b-66d16f60329e\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.792673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") pod \"c818aa24-fa5f-4240-9b0b-66d16f60329e\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.792843 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") pod \"c818aa24-fa5f-4240-9b0b-66d16f60329e\" (UID: \"c818aa24-fa5f-4240-9b0b-66d16f60329e\") " Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.794190 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities" (OuterVolumeSpecName: "utilities") pod "c818aa24-fa5f-4240-9b0b-66d16f60329e" (UID: "c818aa24-fa5f-4240-9b0b-66d16f60329e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.806218 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn" (OuterVolumeSpecName: "kube-api-access-29dvn") pod "c818aa24-fa5f-4240-9b0b-66d16f60329e" (UID: "c818aa24-fa5f-4240-9b0b-66d16f60329e"). InnerVolumeSpecName "kube-api-access-29dvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.857260 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c818aa24-fa5f-4240-9b0b-66d16f60329e" (UID: "c818aa24-fa5f-4240-9b0b-66d16f60329e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.895450 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.895489 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29dvn\" (UniqueName: \"kubernetes.io/projected/c818aa24-fa5f-4240-9b0b-66d16f60329e-kube-api-access-29dvn\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:28 crc kubenswrapper[4869]: I0202 15:56:28.895501 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c818aa24-fa5f-4240-9b0b-66d16f60329e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200383 4869 generic.go:334] "Generic (PLEG): container finished" podID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerID="7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" exitCode=0 Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200444 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerDied","Data":"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf"} Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200459 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpdrs" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200484 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpdrs" event={"ID":"c818aa24-fa5f-4240-9b0b-66d16f60329e","Type":"ContainerDied","Data":"3a7c65907adb73b71465ec45c8d0a735be7267b5d9f38d33359388e78eaded22"} Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.200512 4869 scope.go:117] "RemoveContainer" containerID="7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.232118 4869 scope.go:117] "RemoveContainer" containerID="4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.238528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.247954 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vpdrs"] Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.263189 4869 scope.go:117] "RemoveContainer" containerID="9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.304025 4869 scope.go:117] "RemoveContainer" containerID="7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" Feb 02 15:56:29 crc kubenswrapper[4869]: E0202 15:56:29.304447 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf\": container with ID starting with 7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf not found: ID does not exist" containerID="7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.304488 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf"} err="failed to get container status \"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf\": rpc error: code = NotFound desc = could not find container \"7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf\": container with ID starting with 7451ed9ad6653c475b0955da8ce8791105de128b04a079e5a95e1ceecf960ecf not found: ID does not exist" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.304513 4869 scope.go:117] "RemoveContainer" containerID="4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2" Feb 02 15:56:29 crc kubenswrapper[4869]: E0202 15:56:29.304978 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2\": container with ID starting with 4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2 not found: ID does not exist" containerID="4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.305004 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2"} err="failed to get container status \"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2\": rpc error: code = NotFound desc = could not find container \"4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2\": container with ID starting with 4ffc332f001a461022127c3b981f5b11f58f0c82b68d060cdabd44fd2c8b14a2 not found: ID does not exist" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.305020 4869 scope.go:117] "RemoveContainer" containerID="9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db" Feb 02 15:56:29 crc kubenswrapper[4869]: E0202 15:56:29.305285 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db\": container with ID starting with 9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db not found: ID does not exist" containerID="9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.305311 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db"} err="failed to get container status \"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db\": rpc error: code = NotFound desc = could not find container \"9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db\": container with ID starting with 9ed3d960d6386733f37c0d27883650b7d3aa9cac20f1b675b2d54c33eb3962db not found: ID does not exist" Feb 02 15:56:29 crc kubenswrapper[4869]: I0202 15:56:29.474738 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" path="/var/lib/kubelet/pods/c818aa24-fa5f-4240-9b0b-66d16f60329e/volumes" Feb 02 15:56:38 crc kubenswrapper[4869]: I0202 15:56:38.462502 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 15:56:38 crc kubenswrapper[4869]: E0202 15:56:38.463344 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
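"Cleaned up orphaned pod volumes dir" is the last step of pod teardown: once every volume under /var/lib/kubelet/pods/<uid>/volumes has been unmounted and the pod is gone from the API, the leftover directory tree can be removed. A sketch of that sweep; the path layout comes from the log, while the emptiness check is an assumed simplification of the kubelet's real safety checks:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDir removes a pod's on-disk directory only after its
// volumes subdirectory is empty, i.e. every volume was torn down first.
func cleanupOrphanedPodDir(root, podUID string) error {
	dir := filepath.Join(root, podUID, "volumes")
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	if len(entries) > 0 {
		return fmt.Errorf("refusing cleanup: %s still has %d entries", dir, len(entries))
	}
	fmt.Printf("Cleaned up orphaned pod volumes dir path=%q\n", dir)
	return os.RemoveAll(filepath.Join(root, podUID))
}

func main() {
	root, err := os.MkdirTemp("", "pods")
	if err != nil {
		panic(err)
	}
	podUID := "c818aa24-fa5f-4240-9b0b-66d16f60329e"
	if err := os.MkdirAll(filepath.Join(root, podUID, "volumes"), 0o755); err != nil {
		panic(err)
	}
	if err := cleanupOrphanedPodDir(root, podUID); err != nil {
		panic(err)
	}
}
```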
Feb 02 15:56:38 crc kubenswrapper[4869]: I0202 15:56:38.462502 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:56:38 crc kubenswrapper[4869]: E0202 15:56:38.463344 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:56:49 crc kubenswrapper[4869]: I0202 15:56:49.469553 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:56:49 crc kubenswrapper[4869]: E0202 15:56:49.470579 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:57:00 crc kubenswrapper[4869]: I0202 15:57:00.462864 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:57:00 crc kubenswrapper[4869]: E0202 15:57:00.464050 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:57:13 crc kubenswrapper[4869]: I0202 15:57:13.463035 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:57:13 crc kubenswrapper[4869]: E0202 15:57:13.463705 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:57:25 crc kubenswrapper[4869]: I0202 15:57:25.463529 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:57:25 crc kubenswrapper[4869]: E0202 15:57:25.464506 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:57:39 crc kubenswrapper[4869]: I0202 15:57:39.476777 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:57:39 crc kubenswrapper[4869]: E0202 15:57:39.477837 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:57:53 crc kubenswrapper[4869]: I0202 15:57:53.463218 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:57:53 crc kubenswrapper[4869]: E0202 15:57:53.463994 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:58:07 crc kubenswrapper[4869]: I0202 15:58:07.463134 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:58:07 crc kubenswrapper[4869]: E0202 15:58:07.475808 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:58:21 crc kubenswrapper[4869]: I0202 15:58:21.463653 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:58:21 crc kubenswrapper[4869]: E0202 15:58:21.465171 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:58:35 crc kubenswrapper[4869]: I0202 15:58:35.463560 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:58:35 crc kubenswrapper[4869]: E0202 15:58:35.464540 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:58:47 crc kubenswrapper[4869]: I0202 15:58:47.462207 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:58:47 crc kubenswrapper[4869]: E0202 15:58:47.463050 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
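
[editor's note] The "back-off 5m0s" pairs above and below are kubelet's restart back-off for a crash-looping container: the delay is exponential, starting around 10s and doubling per failed restart up to a 5-minute cap, which this container has clearly reached. The pairs recur every ~11-14s because the pod worker re-syncs on that cadence and is rebuffed each time, not because the container actually restarts that often. A sketch of the schedule, assuming the documented 10s base and 300s cap:

    # CrashLoopBackOff delay: ~10s base, doubling per restart, capped at
    # 300s (5m0s) -- matching the "back-off 5m0s" entries in this log.
    def backoff_schedule(restarts, base=10.0, cap=300.0):
        return [min(base * 2**i, cap) for i in range(restarts)]

    print(backoff_schedule(8))
    # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
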
Feb 02 15:59:00 crc kubenswrapper[4869]: I0202 15:59:00.463774 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:59:00 crc kubenswrapper[4869]: E0202 15:59:00.465111 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:59:15 crc kubenswrapper[4869]: I0202 15:59:15.463228 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:59:15 crc kubenswrapper[4869]: E0202 15:59:15.463901 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.272966 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"]
Feb 02 15:59:24 crc kubenswrapper[4869]: E0202 15:59:24.274314 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="extract-utilities"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.274339 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="extract-utilities"
Feb 02 15:59:24 crc kubenswrapper[4869]: E0202 15:59:24.274389 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="extract-content"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.274401 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="extract-content"
Feb 02 15:59:24 crc kubenswrapper[4869]: E0202 15:59:24.274433 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="registry-server"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.274446 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="registry-server"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.274825 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c818aa24-fa5f-4240-9b0b-66d16f60329e" containerName="registry-server"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.277402 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.281842 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"]
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.417227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.417356 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.417471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.519501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.519606 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.519684 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.520135 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.520606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.553020 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") pod \"redhat-operators-j2vgn\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") " pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:24 crc kubenswrapper[4869]: I0202 15:59:24.607645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:25 crc kubenswrapper[4869]: I0202 15:59:25.110375 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"]
Feb 02 15:59:25 crc kubenswrapper[4869]: I0202 15:59:25.900111 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerID="8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166" exitCode=0
Feb 02 15:59:25 crc kubenswrapper[4869]: I0202 15:59:25.900174 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerDied","Data":"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166"}
Feb 02 15:59:25 crc kubenswrapper[4869]: I0202 15:59:25.900205 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerStarted","Data":"6baa853afe0f90fe9a7256d9639c7a83d812486db1067c2f6feceebd747b7a24"}
Feb 02 15:59:27 crc kubenswrapper[4869]: I0202 15:59:27.918374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerStarted","Data":"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"}
Feb 02 15:59:28 crc kubenswrapper[4869]: I0202 15:59:28.463852 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:59:28 crc kubenswrapper[4869]: E0202 15:59:28.464357 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:59:30 crc kubenswrapper[4869]: I0202 15:59:30.949503 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerID="afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed" exitCode=0
Feb 02 15:59:30 crc kubenswrapper[4869]: I0202 15:59:30.949591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerDied","Data":"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"}
Feb 02 15:59:31 crc kubenswrapper[4869]: I0202 15:59:31.963665 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerStarted","Data":"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"}
Feb 02 15:59:32 crc kubenswrapper[4869]: I0202 15:59:32.001150 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2vgn" podStartSLOduration=2.532055345 podStartE2EDuration="8.001124226s" podCreationTimestamp="2026-02-02 15:59:24 +0000 UTC" firstStartedPulling="2026-02-02 15:59:25.902421137 +0000 UTC m=+5167.547057907" lastFinishedPulling="2026-02-02 15:59:31.371490008 +0000 UTC m=+5173.016126788" observedRunningTime="2026-02-02 15:59:31.989333942 +0000 UTC m=+5173.633970712" watchObservedRunningTime="2026-02-02 15:59:32.001124226 +0000 UTC m=+5173.645761046"
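
[editor's note] The startup-latency entry above decomposes cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes image-pull time (lastFinishedPulling minus firstStartedPulling); the m=+ values are the process's monotonic-clock offsets. The numbers in the entry check out, as this quick verification with the values transcribed from the log shows. (The collect-profiles entry later in the log shows the no-pull case: both pulling timestamps are the zero value 0001-01-01 00:00:00 and SLO equals E2E, 1.309948258s.)

    # Values transcribed from the pod_startup_latency_tracker entry above.
    e2e        = 8.001124226    # watchObservedRunningTime - podCreationTimestamp
    pull_start = 5167.547057907 # firstStartedPulling, monotonic m=+ offset (s)
    pull_end   = 5173.016126788 # lastFinishedPulling, monotonic m=+ offset (s)

    slo = e2e - (pull_end - pull_start)  # SLO duration excludes image-pull time
    print(f"{slo:.9f}")                  # 2.532055345, matching podStartSLOduration
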
Feb 02 15:59:34 crc kubenswrapper[4869]: I0202 15:59:34.607860 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:34 crc kubenswrapper[4869]: I0202 15:59:34.609051 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:35 crc kubenswrapper[4869]: I0202 15:59:35.664508 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2vgn" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server" probeResult="failure" output=<
Feb 02 15:59:35 crc kubenswrapper[4869]: 	timeout: failed to connect service ":50051" within 1s
Feb 02 15:59:35 crc kubenswrapper[4869]: >
Feb 02 15:59:40 crc kubenswrapper[4869]: I0202 15:59:40.464021 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:59:40 crc kubenswrapper[4869]: E0202 15:59:40.465131 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 15:59:44 crc kubenswrapper[4869]: I0202 15:59:44.710143 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:44 crc kubenswrapper[4869]: I0202 15:59:44.769312 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:44 crc kubenswrapper[4869]: I0202 15:59:44.946483 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"]
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.107415 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2vgn" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server" containerID="cri-o://d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155" gracePeriod=2
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.642939 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2vgn"
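
[editor's note] The startup-probe failure a few entries above ("failed to connect service \":50051\" within 1s") is the catalog pod's registry-server gRPC port not yet accepting connections; once the catalog finishes loading, the startup and readiness probes flip to healthy at 15:59:44. A minimal sketch of the same reachability semantics, assuming only that the probe is satisfied by a connection to the port within the timeout:

    import socket

    def port_ready(host="127.0.0.1", port=50051, timeout=1.0):
        """Return True if a TCP connection succeeds within `timeout` seconds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(port_ready())  # False until the registry server is listening on :50051
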
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.820204 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") pod \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") "
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.820316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") pod \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") "
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.820338 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") pod \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\" (UID: \"3cabbeee-42cb-4803-a4fd-e0cf4845d192\") "
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.821685 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities" (OuterVolumeSpecName: "utilities") pod "3cabbeee-42cb-4803-a4fd-e0cf4845d192" (UID: "3cabbeee-42cb-4803-a4fd-e0cf4845d192"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.829135 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt" (OuterVolumeSpecName: "kube-api-access-d9vzt") pod "3cabbeee-42cb-4803-a4fd-e0cf4845d192" (UID: "3cabbeee-42cb-4803-a4fd-e0cf4845d192"). InnerVolumeSpecName "kube-api-access-d9vzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.923528 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9vzt\" (UniqueName: \"kubernetes.io/projected/3cabbeee-42cb-4803-a4fd-e0cf4845d192-kube-api-access-d9vzt\") on node \"crc\" DevicePath \"\""
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.923583 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 15:59:46 crc kubenswrapper[4869]: I0202 15:59:46.937759 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cabbeee-42cb-4803-a4fd-e0cf4845d192" (UID: "3cabbeee-42cb-4803-a4fd-e0cf4845d192"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.025723 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cabbeee-42cb-4803-a4fd-e0cf4845d192-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122836 4869 generic.go:334] "Generic (PLEG): container finished" podID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerID="d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155" exitCode=0
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122881 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerDied","Data":"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"}
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122924 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2vgn"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2vgn" event={"ID":"3cabbeee-42cb-4803-a4fd-e0cf4845d192","Type":"ContainerDied","Data":"6baa853afe0f90fe9a7256d9639c7a83d812486db1067c2f6feceebd747b7a24"}
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.122979 4869 scope.go:117] "RemoveContainer" containerID="d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.168041 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"]
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.168228 4869 scope.go:117] "RemoveContainer" containerID="afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.173414 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2vgn"]
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.206975 4869 scope.go:117] "RemoveContainer" containerID="8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.268798 4869 scope.go:117] "RemoveContainer" containerID="d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"
Feb 02 15:59:47 crc kubenswrapper[4869]: E0202 15:59:47.269340 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155\": container with ID starting with d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155 not found: ID does not exist" containerID="d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.269383 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155"} err="failed to get container status \"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155\": rpc error: code = NotFound desc = could not find container \"d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155\": container with ID starting with d92ad1f160e85a54bc6a8da03dd28905182bcbd28f1ddc2c11178c3e7ddef155 not found: ID does not exist"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.269409 4869 scope.go:117] "RemoveContainer" containerID="afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"
Feb 02 15:59:47 crc kubenswrapper[4869]: E0202 15:59:47.269660 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed\": container with ID starting with afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed not found: ID does not exist" containerID="afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.269704 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed"} err="failed to get container status \"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed\": rpc error: code = NotFound desc = could not find container \"afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed\": container with ID starting with afc94f1cd75256c125a34f72586492fe0960e765c67cf3c7baee3e7308c6aaed not found: ID does not exist"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.269717 4869 scope.go:117] "RemoveContainer" containerID="8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166"
Feb 02 15:59:47 crc kubenswrapper[4869]: E0202 15:59:47.270038 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166\": container with ID starting with 8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166 not found: ID does not exist" containerID="8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.270059 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166"} err="failed to get container status \"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166\": rpc error: code = NotFound desc = could not find container \"8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166\": container with ID starting with 8cbc0098192bd11541b887b2ef5449fe414ae248ea393f2ba734776e6cea3166 not found: ID does not exist"
Feb 02 15:59:47 crc kubenswrapper[4869]: I0202 15:59:47.473389 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" path="/var/lib/kubelet/pods/3cabbeee-42cb-4803-a4fd-e0cf4845d192/volumes"
Feb 02 15:59:54 crc kubenswrapper[4869]: I0202 15:59:54.463577 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 15:59:54 crc kubenswrapper[4869]: E0202 15:59:54.464293 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.162678 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"]
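
[editor's note] The numeric suffix in collect-profiles-29500800 (and in keystone-cron-29500801 further down) is the CronJob controller's scheduled time expressed in minutes since the Unix epoch, which is why the ADD lands exactly on the 16:00:00 tick. The arithmetic:

    from datetime import datetime, timezone

    # CronJob child Jobs are named <cronjob>-<scheduled minutes since epoch>.
    suffix = 29500800
    when = datetime.fromtimestamp(suffix * 60, tz=timezone.utc)
    print(when)  # 2026-02-02 16:00:00+00:00 -- matching the SyncLoop ADD above
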
Feb 02 16:00:00 crc kubenswrapper[4869]: E0202 16:00:00.163702 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.163718 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server"
Feb 02 16:00:00 crc kubenswrapper[4869]: E0202 16:00:00.163741 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="extract-content"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.163749 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="extract-content"
Feb 02 16:00:00 crc kubenswrapper[4869]: E0202 16:00:00.163763 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="extract-utilities"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.163772 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="extract-utilities"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.164086 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cabbeee-42cb-4803-a4fd-e0cf4845d192" containerName="registry-server"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.164840 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.167624 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.169716 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.190425 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"]
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.226779 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.226919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.226947 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.328721 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.328764 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.328888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.329880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.334870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.355144 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") pod \"collect-profiles-29500800-mc6rs\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.492223 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
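
[editor's note] The volume entries throughout this log trace the reconciler's fixed per-volume order: VerifyControllerAttachedVolume registers the desired mount, MountVolume runs the plugin's SetUp, and on deletion UnmountVolume.TearDown precedes the "Volume detached" record (with UnmountDevice as an extra step for attachable volumes such as the local PV torn down near the end of this log). A minimal state-machine sketch of that ordering; this is an illustration with hypothetical names, not kubelet code:

    from enum import Enum, auto

    class VolumeState(Enum):
        ATTACHED  = auto()  # VerifyControllerAttachedVolume succeeded
        MOUNTED   = auto()  # MountVolume.SetUp succeeded
        TORN_DOWN = auto()  # UnmountVolume.TearDown succeeded
        DETACHED  = auto()  # "Volume detached" logged

    # The order the reconciler enforces for each volume (sketch only).
    LIFECYCLE = [VolumeState.ATTACHED, VolumeState.MOUNTED,
                 VolumeState.TORN_DOWN, VolumeState.DETACHED]

    def advance(state):
        i = LIFECYCLE.index(state)
        return LIFECYCLE[min(i + 1, len(LIFECYCLE) - 1)]

    s = VolumeState.ATTACHED
    while s is not VolumeState.DETACHED:
        s = advance(s)
        print(s.name)  # MOUNTED, TORN_DOWN, DETACHED
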
Feb 02 16:00:00 crc kubenswrapper[4869]: I0202 16:00:00.992526 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"]
Feb 02 16:00:01 crc kubenswrapper[4869]: I0202 16:00:01.284317 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" event={"ID":"2a96ca5f-1cc6-4490-9db4-56f297abcbcf","Type":"ContainerStarted","Data":"46292c3441f018ca4dd7c614a9eecb2a4574facde6b3dd81d20d43ae16aca676"}
Feb 02 16:00:01 crc kubenswrapper[4869]: I0202 16:00:01.284369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" event={"ID":"2a96ca5f-1cc6-4490-9db4-56f297abcbcf","Type":"ContainerStarted","Data":"039837de22242761a89dbabfc668e7ba6a60ec68f859298d86cbad8eee7e0fa2"}
Feb 02 16:00:01 crc kubenswrapper[4869]: I0202 16:00:01.309966 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" podStartSLOduration=1.309948258 podStartE2EDuration="1.309948258s" podCreationTimestamp="2026-02-02 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 16:00:01.306540767 +0000 UTC m=+5202.951177537" watchObservedRunningTime="2026-02-02 16:00:01.309948258 +0000 UTC m=+5202.954585028"
Feb 02 16:00:02 crc kubenswrapper[4869]: I0202 16:00:02.293794 4869 generic.go:334] "Generic (PLEG): container finished" podID="2a96ca5f-1cc6-4490-9db4-56f297abcbcf" containerID="46292c3441f018ca4dd7c614a9eecb2a4574facde6b3dd81d20d43ae16aca676" exitCode=0
Feb 02 16:00:02 crc kubenswrapper[4869]: I0202 16:00:02.293849 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" event={"ID":"2a96ca5f-1cc6-4490-9db4-56f297abcbcf","Type":"ContainerDied","Data":"46292c3441f018ca4dd7c614a9eecb2a4574facde6b3dd81d20d43ae16aca676"}
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.693540 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.805311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") pod \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") "
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.805476 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") pod \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") "
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.805504 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") pod \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\" (UID: \"2a96ca5f-1cc6-4490-9db4-56f297abcbcf\") "
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.806629 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a96ca5f-1cc6-4490-9db4-56f297abcbcf" (UID: "2a96ca5f-1cc6-4490-9db4-56f297abcbcf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.811125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s" (OuterVolumeSpecName: "kube-api-access-2tt8s") pod "2a96ca5f-1cc6-4490-9db4-56f297abcbcf" (UID: "2a96ca5f-1cc6-4490-9db4-56f297abcbcf"). InnerVolumeSpecName "kube-api-access-2tt8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.811636 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a96ca5f-1cc6-4490-9db4-56f297abcbcf" (UID: "2a96ca5f-1cc6-4490-9db4-56f297abcbcf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.908294 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.908338 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tt8s\" (UniqueName: \"kubernetes.io/projected/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-kube-api-access-2tt8s\") on node \"crc\" DevicePath \"\""
Feb 02 16:00:03 crc kubenswrapper[4869]: I0202 16:00:03.908348 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a96ca5f-1cc6-4490-9db4-56f297abcbcf-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.318436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs" event={"ID":"2a96ca5f-1cc6-4490-9db4-56f297abcbcf","Type":"ContainerDied","Data":"039837de22242761a89dbabfc668e7ba6a60ec68f859298d86cbad8eee7e0fa2"}
Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.318494 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="039837de22242761a89dbabfc668e7ba6a60ec68f859298d86cbad8eee7e0fa2"
Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.318524 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500800-mc6rs"
Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.786207 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"]
Feb 02 16:00:04 crc kubenswrapper[4869]: I0202 16:00:04.795340 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500755-xwwrj"]
Feb 02 16:00:05 crc kubenswrapper[4869]: I0202 16:00:05.463662 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 16:00:05 crc kubenswrapper[4869]: E0202 16:00:05.464699 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:00:05 crc kubenswrapper[4869]: I0202 16:00:05.484576 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d86c4a4-a435-4f57-9566-eaa1e74d1f5c" path="/var/lib/kubelet/pods/8d86c4a4-a435-4f57-9566-eaa1e74d1f5c/volumes"
Feb 02 16:00:19 crc kubenswrapper[4869]: I0202 16:00:19.495996 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 16:00:19 crc kubenswrapper[4869]: E0202 16:00:19.497164 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:00:32 crc kubenswrapper[4869]: I0202 16:00:32.250220 4869 scope.go:117] "RemoveContainer" containerID="1ee657e7e391fb0be0a60133a3c2bc04a0767f387cf6cc279ee259f05131226f"
Feb 02 16:00:32 crc kubenswrapper[4869]: I0202 16:00:32.462950 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 16:00:32 crc kubenswrapper[4869]: E0202 16:00:32.463237 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:00:45 crc kubenswrapper[4869]: I0202 16:00:45.463623 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 16:00:45 crc kubenswrapper[4869]: E0202 16:00:45.464452 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:00:59 crc kubenswrapper[4869]: I0202 16:00:59.477763 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 16:00:59 crc kubenswrapper[4869]: E0202 16:00:59.478656 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:00:59 crc kubenswrapper[4869]: I0202 16:00:59.831086 4869 generic.go:334] "Generic (PLEG): container finished" podID="1ccbb21f-23d9-48be-a212-547e064326f6" containerID="ac9a60d8c10f53a0410a3a801abad85986e73c2832d375d41caefea008863171" exitCode=1
Feb 02 16:00:59 crc kubenswrapper[4869]: I0202 16:00:59.831130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccbb21f-23d9-48be-a212-547e064326f6","Type":"ContainerDied","Data":"ac9a60d8c10f53a0410a3a801abad85986e73c2832d375d41caefea008863171"}
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.165416 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500801-n7swm"]
Feb 02 16:01:00 crc kubenswrapper[4869]: E0202 16:01:00.166129 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a96ca5f-1cc6-4490-9db4-56f297abcbcf" containerName="collect-profiles"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.166141 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a96ca5f-1cc6-4490-9db4-56f297abcbcf" containerName="collect-profiles"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.166363 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a96ca5f-1cc6-4490-9db4-56f297abcbcf" containerName="collect-profiles"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.167033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.173114 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500801-n7swm"]
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.290383 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.290482 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.290519 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.290642 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.392952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.393025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.393135 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.393224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.773089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.773131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.773550 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.784070 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") pod \"keystone-cron-29500801-n7swm\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:00 crc kubenswrapper[4869]: I0202 16:01:00.844444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm"
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.307288 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.389653 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500801-n7swm"]
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414517 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") "
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414577 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") "
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") "
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414684 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") "
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414723 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") "
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414748 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") "
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") "
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414831 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") "
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.414989 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") pod \"1ccbb21f-23d9-48be-a212-547e064326f6\" (UID: \"1ccbb21f-23d9-48be-a212-547e064326f6\") " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.415396 4869 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.416095 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data" (OuterVolumeSpecName: "config-data") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.419046 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.419207 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj" (OuterVolumeSpecName: "kube-api-access-zh7qj") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "kube-api-access-zh7qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.421267 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.446199 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.448559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.459405 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.483353 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1ccbb21f-23d9-48be-a212-547e064326f6" (UID: "1ccbb21f-23d9-48be-a212-547e064326f6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517817 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517887 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517903 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh7qj\" (UniqueName: \"kubernetes.io/projected/1ccbb21f-23d9-48be-a212-547e064326f6-kube-api-access-zh7qj\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517939 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517954 4869 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1ccbb21f-23d9-48be-a212-547e064326f6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517967 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517981 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.517993 4869 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1ccbb21f-23d9-48be-a212-547e064326f6-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.548632 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.620228 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.852028 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1ccbb21f-23d9-48be-a212-547e064326f6","Type":"ContainerDied","Data":"c08d2dd97b8a58de7b4399802e9fdd669c46ddb7f1d0f2a64a4f17afc41bb15d"}
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.852078 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08d2dd97b8a58de7b4399802e9fdd669c46ddb7f1d0f2a64a4f17afc41bb15d"
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.852143 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.861482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500801-n7swm" event={"ID":"35e8f12b-8b8b-4309-a57e-e46c357acc6d","Type":"ContainerStarted","Data":"4c40d44f0adae652ed9d418c3153ac6f7654d77d457608da8a24a0570aeaf2b9"}
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.861526 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500801-n7swm" event={"ID":"35e8f12b-8b8b-4309-a57e-e46c357acc6d","Type":"ContainerStarted","Data":"1cefb99e5f64ca437ace858bea1e79a6cd5a8188aa2807064555af4b66cedc09"}
Feb 02 16:01:01 crc kubenswrapper[4869]: I0202 16:01:01.886098 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500801-n7swm" podStartSLOduration=1.886079536 podStartE2EDuration="1.886079536s" podCreationTimestamp="2026-02-02 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 16:01:01.879680201 +0000 UTC m=+5263.524316991" watchObservedRunningTime="2026-02-02 16:01:01.886079536 +0000 UTC m=+5263.530716306"
Feb 02 16:01:04 crc kubenswrapper[4869]: I0202 16:01:04.887083 4869 generic.go:334] "Generic (PLEG): container finished" podID="35e8f12b-8b8b-4309-a57e-e46c357acc6d" containerID="4c40d44f0adae652ed9d418c3153ac6f7654d77d457608da8a24a0570aeaf2b9" exitCode=0
Feb 02 16:01:04 crc kubenswrapper[4869]: I0202 16:01:04.887152 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500801-n7swm" event={"ID":"35e8f12b-8b8b-4309-a57e-e46c357acc6d","Type":"ContainerDied","Data":"4c40d44f0adae652ed9d418c3153ac6f7654d77d457608da8a24a0570aeaf2b9"}
Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.256714 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm"
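The pod_startup_latency_tracker line above shows how the kubelet measures startup: podStartE2EDuration is the observed running time minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (zero here, since both pull timestamps are the zero value, which is why the two numbers match: 16:01:01.886079536 - 16:01:00 = 1.886079536s). The Go snippet below reproduces the arithmetic using the redhat-marketplace-nb88j tracker entry further down, where a real pull window is subtracted; this is our reconstruction of the formula from the logged values, not kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps as printed in the log.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values copied from the redhat-marketplace-nb88j startup-duration entry.
	created := parse("2026-02-02 16:01:11 +0000 UTC")
	firstPull := parse("2026-02-02 16:01:12.970853331 +0000 UTC")
	lastPull := parse("2026-02-02 16:01:15.400246555 +0000 UTC")
	running := parse("2026-02-02 16:01:16.033697565 +0000 UTC")

	e2e := running.Sub(created)          // 5.033697565s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.604304341s = podStartSLOduration
	fmt.Println(e2e, slo)
}

Both computed values match the logged podStartE2EDuration="5.033697565s" and podStartSLOduration=2.604304341 for that pod, confirming that the SLO figure simply discounts time spent pulling images.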
Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.316978 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") pod \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.317111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") pod \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.317155 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") pod \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.317275 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") pod \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\" (UID: \"35e8f12b-8b8b-4309-a57e-e46c357acc6d\") " Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.332189 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "35e8f12b-8b8b-4309-a57e-e46c357acc6d" (UID: "35e8f12b-8b8b-4309-a57e-e46c357acc6d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.332442 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4" (OuterVolumeSpecName: "kube-api-access-xf4x4") pod "35e8f12b-8b8b-4309-a57e-e46c357acc6d" (UID: "35e8f12b-8b8b-4309-a57e-e46c357acc6d"). InnerVolumeSpecName "kube-api-access-xf4x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.346000 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35e8f12b-8b8b-4309-a57e-e46c357acc6d" (UID: "35e8f12b-8b8b-4309-a57e-e46c357acc6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.384853 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data" (OuterVolumeSpecName: "config-data") pod "35e8f12b-8b8b-4309-a57e-e46c357acc6d" (UID: "35e8f12b-8b8b-4309-a57e-e46c357acc6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.420118 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4x4\" (UniqueName: \"kubernetes.io/projected/35e8f12b-8b8b-4309-a57e-e46c357acc6d-kube-api-access-xf4x4\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.420158 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.420168 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.420176 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35e8f12b-8b8b-4309-a57e-e46c357acc6d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.905012 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500801-n7swm" event={"ID":"35e8f12b-8b8b-4309-a57e-e46c357acc6d","Type":"ContainerDied","Data":"1cefb99e5f64ca437ace858bea1e79a6cd5a8188aa2807064555af4b66cedc09"} Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.905047 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cefb99e5f64ca437ace858bea1e79a6cd5a8188aa2807064555af4b66cedc09" Feb 02 16:01:06 crc kubenswrapper[4869]: I0202 16:01:06.905072 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500801-n7swm" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.972149 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 16:01:10 crc kubenswrapper[4869]: E0202 16:01:10.973250 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccbb21f-23d9-48be-a212-547e064326f6" containerName="tempest-tests-tempest-tests-runner" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.973268 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccbb21f-23d9-48be-a212-547e064326f6" containerName="tempest-tests-tempest-tests-runner" Feb 02 16:01:10 crc kubenswrapper[4869]: E0202 16:01:10.973290 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e8f12b-8b8b-4309-a57e-e46c357acc6d" containerName="keystone-cron" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.973298 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e8f12b-8b8b-4309-a57e-e46c357acc6d" containerName="keystone-cron" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.973574 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="35e8f12b-8b8b-4309-a57e-e46c357acc6d" containerName="keystone-cron" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.973598 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ccbb21f-23d9-48be-a212-547e064326f6" containerName="tempest-tests-tempest-tests-runner" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.974380 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.978332 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-72k4z" Feb 02 16:01:10 crc kubenswrapper[4869]: I0202 16:01:10.987604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.038344 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.038421 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmw8\" (UniqueName: \"kubernetes.io/projected/6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d-kube-api-access-7bmw8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.140041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.140163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bmw8\" (UniqueName: \"kubernetes.io/projected/6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d-kube-api-access-7bmw8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.140723 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.158573 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bmw8\" (UniqueName: \"kubernetes.io/projected/6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d-kube-api-access-7bmw8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.164279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc 
kubenswrapper[4869]: I0202 16:01:11.236495 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.243650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.278185 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.301394 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.345470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.345524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.345549 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.448108 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.448366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.448563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.448924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:11 crc 
kubenswrapper[4869]: I0202 16:01:11.448923 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j"
Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.466315 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 16:01:11 crc kubenswrapper[4869]: E0202 16:01:11.467029 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21"
Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.486724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") pod \"redhat-marketplace-nb88j\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " pod="openshift-marketplace/redhat-marketplace-nb88j"
Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.581883 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb88j"
Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.747244 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.759540 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 16:01:11 crc kubenswrapper[4869]: I0202 16:01:11.957362 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d","Type":"ContainerStarted","Data":"192ffe3191ab6cc78dc87919064441ec6c892ea35f60414236288b735b2f6893"}
Feb 02 16:01:12 crc kubenswrapper[4869]: I0202 16:01:12.050993 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"]
Feb 02 16:01:12 crc kubenswrapper[4869]: I0202 16:01:12.969414 4869 generic.go:334] "Generic (PLEG): container finished" podID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerID="e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1" exitCode=0
Feb 02 16:01:12 crc kubenswrapper[4869]: I0202 16:01:12.969452 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerDied","Data":"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1"}
Feb 02 16:01:12 crc kubenswrapper[4869]: I0202 16:01:12.970039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerStarted","Data":"8c561c68a8d80261d3ae57c5116c0d78271a1ec102819936dbd21831ba6c58c2"}
Feb 02 16:01:13 crc kubenswrapper[4869]: I0202 16:01:13.980076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerStarted","Data":"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3"}
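The pod_workers error above is CrashLoopBackOff sitting at its ceiling: the kubelet delays each restart of a repeatedly failing container by an exponentially growing back-off, and "back-off 5m0s" is the capped maximum. The sketch below assumes the commonly cited defaults of a 10s initial delay doubling up to a 5m cap; the exact constants live in kubelet code, not in this log.

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second // assumed initial restart delay
		max     = 5 * time.Minute  // assumed cap; matches "back-off 5m0s" above
	)
	delay := initial
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > max {
			// Once the cap is reached, every further sync logs the same
			// "back-off 5m0s restarting failed container" error.
			delay = max
		}
	}
}

Under these assumptions the ladder runs 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s for every subsequent restart, which is consistent with machine-config-daemon-dql2j logging the 5m0s figure repeatedly here.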
pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerStarted","Data":"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3"} Feb 02 16:01:13 crc kubenswrapper[4869]: I0202 16:01:13.981762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d","Type":"ContainerStarted","Data":"571ffcdf208ee41bf7942053a1cb2d0aa05f16787f1a599db9876ed5d2b2f4ce"} Feb 02 16:01:14 crc kubenswrapper[4869]: I0202 16:01:14.016251 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.9207120399999997 podStartE2EDuration="4.016230358s" podCreationTimestamp="2026-02-02 16:01:10 +0000 UTC" firstStartedPulling="2026-02-02 16:01:11.759322295 +0000 UTC m=+5273.403959065" lastFinishedPulling="2026-02-02 16:01:12.854840613 +0000 UTC m=+5274.499477383" observedRunningTime="2026-02-02 16:01:14.010198643 +0000 UTC m=+5275.654835413" watchObservedRunningTime="2026-02-02 16:01:14.016230358 +0000 UTC m=+5275.660867128" Feb 02 16:01:14 crc kubenswrapper[4869]: I0202 16:01:14.993250 4869 generic.go:334] "Generic (PLEG): container finished" podID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerID="ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3" exitCode=0 Feb 02 16:01:14 crc kubenswrapper[4869]: I0202 16:01:14.993305 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerDied","Data":"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3"} Feb 02 16:01:16 crc kubenswrapper[4869]: I0202 16:01:16.003131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerStarted","Data":"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6"} Feb 02 16:01:16 crc kubenswrapper[4869]: I0202 16:01:16.033722 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nb88j" podStartSLOduration=2.6043043409999997 podStartE2EDuration="5.033697565s" podCreationTimestamp="2026-02-02 16:01:11 +0000 UTC" firstStartedPulling="2026-02-02 16:01:12.970853331 +0000 UTC m=+5274.615490101" lastFinishedPulling="2026-02-02 16:01:15.400246555 +0000 UTC m=+5277.044883325" observedRunningTime="2026-02-02 16:01:16.027391703 +0000 UTC m=+5277.672028473" watchObservedRunningTime="2026-02-02 16:01:16.033697565 +0000 UTC m=+5277.678334335" Feb 02 16:01:21 crc kubenswrapper[4869]: I0202 16:01:21.582455 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:21 crc kubenswrapper[4869]: I0202 16:01:21.584113 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:21 crc kubenswrapper[4869]: I0202 16:01:21.626351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:22 crc kubenswrapper[4869]: I0202 16:01:22.142444 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:22 crc kubenswrapper[4869]: I0202 
16:01:22.184849 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:22 crc kubenswrapper[4869]: I0202 16:01:22.462807 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715" Feb 02 16:01:23 crc kubenswrapper[4869]: I0202 16:01:23.067179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89"} Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.077656 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nb88j" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="registry-server" containerID="cri-o://8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" gracePeriod=2 Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.505194 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.621180 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") pod \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.621249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") pod \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.621316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") pod \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\" (UID: \"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49\") " Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.623420 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities" (OuterVolumeSpecName: "utilities") pod "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" (UID: "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.632325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn" (OuterVolumeSpecName: "kube-api-access-x7qdn") pod "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" (UID: "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49"). InnerVolumeSpecName "kube-api-access-x7qdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.652513 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" (UID: "40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.723721 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7qdn\" (UniqueName: \"kubernetes.io/projected/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-kube-api-access-x7qdn\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.724072 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:24 crc kubenswrapper[4869]: I0202 16:01:24.724084 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088183 4869 generic.go:334] "Generic (PLEG): container finished" podID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerID="8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" exitCode=0 Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088238 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerDied","Data":"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6"} Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088270 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb88j" event={"ID":"40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49","Type":"ContainerDied","Data":"8c561c68a8d80261d3ae57c5116c0d78271a1ec102819936dbd21831ba6c58c2"} Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088292 4869 scope.go:117] "RemoveContainer" containerID="8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.088449 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb88j" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.116862 4869 scope.go:117] "RemoveContainer" containerID="ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.137079 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.145697 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb88j"] Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.170442 4869 scope.go:117] "RemoveContainer" containerID="e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.205435 4869 scope.go:117] "RemoveContainer" containerID="8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" Feb 02 16:01:25 crc kubenswrapper[4869]: E0202 16:01:25.205958 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6\": container with ID starting with 8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6 not found: ID does not exist" containerID="8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.206012 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6"} err="failed to get container status \"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6\": rpc error: code = NotFound desc = could not find container \"8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6\": container with ID starting with 8c3a57064a9508ea9a9ff893bb8bde169aa926f29f291ea97d5728e21b1270b6 not found: ID does not exist" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.206046 4869 scope.go:117] "RemoveContainer" containerID="ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3" Feb 02 16:01:25 crc kubenswrapper[4869]: E0202 16:01:25.206721 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3\": container with ID starting with ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3 not found: ID does not exist" containerID="ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.206764 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3"} err="failed to get container status \"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3\": rpc error: code = NotFound desc = could not find container \"ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3\": container with ID starting with ad0e62ebdf6342f8b5844490aa74ab792e24500103e152524edbcb7c30d751e3 not found: ID does not exist" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.206790 4869 scope.go:117] "RemoveContainer" containerID="e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1" Feb 02 16:01:25 crc kubenswrapper[4869]: E0202 16:01:25.207127 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1\": container with ID starting with e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1 not found: ID does not exist" containerID="e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.207156 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1"} err="failed to get container status \"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1\": rpc error: code = NotFound desc = could not find container \"e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1\": container with ID starting with e7154ae6edfc63bfaf2c14f6ef426ceac87310ff8d176618a7a5b64816f8baf1 not found: ID does not exist" Feb 02 16:01:25 crc kubenswrapper[4869]: I0202 16:01:25.478640 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" path="/var/lib/kubelet/pods/40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49/volumes" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.488934 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:01:53 crc kubenswrapper[4869]: E0202 16:01:53.489862 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="extract-content" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.489877 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="extract-content" Feb 02 16:01:53 crc kubenswrapper[4869]: E0202 16:01:53.489897 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="extract-utilities" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.489921 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="extract-utilities" Feb 02 16:01:53 crc kubenswrapper[4869]: E0202 16:01:53.489943 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="registry-server" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.489950 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="registry-server" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.490358 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b4f7cd-73c1-4c72-9c85-cc1ba3acbf49" containerName="registry-server" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.491290 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.494064 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9szhh"/"default-dockercfg-kj6xn" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.499710 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9szhh"/"openshift-service-ca.crt" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.500176 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9szhh"/"kube-root-ca.crt" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.509188 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.546683 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.547004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.648732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.648843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.649444 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.683143 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") pod \"must-gather-wq69k\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:53 crc kubenswrapper[4869]: I0202 16:01:53.814815 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:01:54 crc kubenswrapper[4869]: I0202 16:01:54.277733 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:01:54 crc kubenswrapper[4869]: I0202 16:01:54.372953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/must-gather-wq69k" event={"ID":"56e87714-4847-4c2f-81a9-357123c1e872","Type":"ContainerStarted","Data":"ab7c4fdd48474e2f60641d8627c5c42465d5c53003bf3a4e726e765ab0daab84"} Feb 02 16:02:00 crc kubenswrapper[4869]: I0202 16:02:00.437934 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/must-gather-wq69k" event={"ID":"56e87714-4847-4c2f-81a9-357123c1e872","Type":"ContainerStarted","Data":"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1"} Feb 02 16:02:00 crc kubenswrapper[4869]: I0202 16:02:00.438447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/must-gather-wq69k" event={"ID":"56e87714-4847-4c2f-81a9-357123c1e872","Type":"ContainerStarted","Data":"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2"} Feb 02 16:02:00 crc kubenswrapper[4869]: I0202 16:02:00.461744 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9szhh/must-gather-wq69k" podStartSLOduration=2.561421871 podStartE2EDuration="7.46172156s" podCreationTimestamp="2026-02-02 16:01:53 +0000 UTC" firstStartedPulling="2026-02-02 16:01:54.285556044 +0000 UTC m=+5315.930192824" lastFinishedPulling="2026-02-02 16:01:59.185855743 +0000 UTC m=+5320.830492513" observedRunningTime="2026-02-02 16:02:00.453893922 +0000 UTC m=+5322.098530782" watchObservedRunningTime="2026-02-02 16:02:00.46172156 +0000 UTC m=+5322.106358330" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.112484 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9szhh/crc-debug-6r9jq"] Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.114747 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.216830 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.216935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.319342 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.319445 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.319508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.338533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") pod \"crc-debug-6r9jq\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.436309 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:02:05 crc kubenswrapper[4869]: I0202 16:02:05.498827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" event={"ID":"4883a162-0123-4994-b91f-680ccb87e785","Type":"ContainerStarted","Data":"da9a1ba8d0e61d04f903c2e3c8eceb258c48f2a092d0744222de7809359d62f8"} Feb 02 16:02:17 crc kubenswrapper[4869]: I0202 16:02:17.608999 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" event={"ID":"4883a162-0123-4994-b91f-680ccb87e785","Type":"ContainerStarted","Data":"d472ad4cfffb6ce34fcab232f456faf2bc5c139884bc19851d79c2adff55a49f"} Feb 02 16:02:17 crc kubenswrapper[4869]: I0202 16:02:17.632704 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" podStartSLOduration=1.670547413 podStartE2EDuration="12.63267934s" podCreationTimestamp="2026-02-02 16:02:05 +0000 UTC" firstStartedPulling="2026-02-02 16:02:05.467518813 +0000 UTC m=+5327.112155573" lastFinishedPulling="2026-02-02 16:02:16.42965073 +0000 UTC m=+5338.074287500" observedRunningTime="2026-02-02 16:02:17.6194296 +0000 UTC m=+5339.264066370" watchObservedRunningTime="2026-02-02 16:02:17.63267934 +0000 UTC m=+5339.277316110" Feb 02 16:03:05 crc kubenswrapper[4869]: I0202 16:03:05.047551 4869 generic.go:334] "Generic (PLEG): container finished" podID="4883a162-0123-4994-b91f-680ccb87e785" containerID="d472ad4cfffb6ce34fcab232f456faf2bc5c139884bc19851d79c2adff55a49f" exitCode=0 Feb 02 16:03:05 crc kubenswrapper[4869]: I0202 16:03:05.047672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" event={"ID":"4883a162-0123-4994-b91f-680ccb87e785","Type":"ContainerDied","Data":"d472ad4cfffb6ce34fcab232f456faf2bc5c139884bc19851d79c2adff55a49f"} Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.209312 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.242603 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-6r9jq"] Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.250992 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-6r9jq"] Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.328318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") pod \"4883a162-0123-4994-b91f-680ccb87e785\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.328447 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host" (OuterVolumeSpecName: "host") pod "4883a162-0123-4994-b91f-680ccb87e785" (UID: "4883a162-0123-4994-b91f-680ccb87e785"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.328887 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") pod \"4883a162-0123-4994-b91f-680ccb87e785\" (UID: \"4883a162-0123-4994-b91f-680ccb87e785\") " Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.329320 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4883a162-0123-4994-b91f-680ccb87e785-host\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.334403 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68" (OuterVolumeSpecName: "kube-api-access-vjf68") pod "4883a162-0123-4994-b91f-680ccb87e785" (UID: "4883a162-0123-4994-b91f-680ccb87e785"). InnerVolumeSpecName "kube-api-access-vjf68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:03:06 crc kubenswrapper[4869]: I0202 16:03:06.431624 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjf68\" (UniqueName: \"kubernetes.io/projected/4883a162-0123-4994-b91f-680ccb87e785-kube-api-access-vjf68\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.072397 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9a1ba8d0e61d04f903c2e3c8eceb258c48f2a092d0744222de7809359d62f8" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.072462 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-6r9jq" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.441279 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9szhh/crc-debug-pztxt"] Feb 02 16:03:07 crc kubenswrapper[4869]: E0202 16:03:07.443827 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4883a162-0123-4994-b91f-680ccb87e785" containerName="container-00" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.443869 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4883a162-0123-4994-b91f-680ccb87e785" containerName="container-00" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.444182 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4883a162-0123-4994-b91f-680ccb87e785" containerName="container-00" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.445013 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.474509 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4883a162-0123-4994-b91f-680ccb87e785" path="/var/lib/kubelet/pods/4883a162-0123-4994-b91f-680ccb87e785/volumes" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.565162 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.565311 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.667874 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.667988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.668119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.687477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") pod \"crc-debug-pztxt\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") " pod="openshift-must-gather-9szhh/crc-debug-pztxt" Feb 02 16:03:07 crc kubenswrapper[4869]: I0202 16:03:07.764656 4869 util.go:30] "No sandbox for pod can be found. 
Feb 02 16:03:08 crc kubenswrapper[4869]: I0202 16:03:08.083823 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-pztxt" event={"ID":"4fb26728-ed2e-4205-b7f5-ca7a98b8c910","Type":"ContainerStarted","Data":"77f7b5d294b60bfbbe355f8b5327d53d20b5718b7bb4f2b6f233a898b734eaf7"}
Feb 02 16:03:08 crc kubenswrapper[4869]: I0202 16:03:08.084301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-pztxt" event={"ID":"4fb26728-ed2e-4205-b7f5-ca7a98b8c910","Type":"ContainerStarted","Data":"5d263a760cd3c718d7a45fe4a9a6e935c14e4a22487dd64c8dac3deec49b788a"}
Feb 02 16:03:08 crc kubenswrapper[4869]: I0202 16:03:08.106993 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9szhh/crc-debug-pztxt" podStartSLOduration=1.106968165 podStartE2EDuration="1.106968165s" podCreationTimestamp="2026-02-02 16:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 16:03:08.097162768 +0000 UTC m=+5389.741799538" watchObservedRunningTime="2026-02-02 16:03:08.106968165 +0000 UTC m=+5389.751604955"
Feb 02 16:03:09 crc kubenswrapper[4869]: I0202 16:03:09.095345 4869 generic.go:334] "Generic (PLEG): container finished" podID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" containerID="77f7b5d294b60bfbbe355f8b5327d53d20b5718b7bb4f2b6f233a898b734eaf7" exitCode=0
Feb 02 16:03:09 crc kubenswrapper[4869]: I0202 16:03:09.095436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-pztxt" event={"ID":"4fb26728-ed2e-4205-b7f5-ca7a98b8c910","Type":"ContainerDied","Data":"77f7b5d294b60bfbbe355f8b5327d53d20b5718b7bb4f2b6f233a898b734eaf7"}
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.202726 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-pztxt"
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.321032 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") pod \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") "
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.321115 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") pod \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\" (UID: \"4fb26728-ed2e-4205-b7f5-ca7a98b8c910\") "
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.322425 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host" (OuterVolumeSpecName: "host") pod "4fb26728-ed2e-4205-b7f5-ca7a98b8c910" (UID: "4fb26728-ed2e-4205-b7f5-ca7a98b8c910"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.328782 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk" (OuterVolumeSpecName: "kube-api-access-h66hk") pod "4fb26728-ed2e-4205-b7f5-ca7a98b8c910" (UID: "4fb26728-ed2e-4205-b7f5-ca7a98b8c910"). InnerVolumeSpecName "kube-api-access-h66hk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.423815 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h66hk\" (UniqueName: \"kubernetes.io/projected/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-kube-api-access-h66hk\") on node \"crc\" DevicePath \"\""
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.423852 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb26728-ed2e-4205-b7f5-ca7a98b8c910-host\") on node \"crc\" DevicePath \"\""
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.849431 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-pztxt"]
Feb 02 16:03:10 crc kubenswrapper[4869]: I0202 16:03:10.860421 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-pztxt"]
Feb 02 16:03:11 crc kubenswrapper[4869]: I0202 16:03:11.118759 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d263a760cd3c718d7a45fe4a9a6e935c14e4a22487dd64c8dac3deec49b788a"
Feb 02 16:03:11 crc kubenswrapper[4869]: I0202 16:03:11.118871 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-pztxt"
Feb 02 16:03:11 crc kubenswrapper[4869]: I0202 16:03:11.477540 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" path="/var/lib/kubelet/pods/4fb26728-ed2e-4205-b7f5-ca7a98b8c910/volumes"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.012081 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9szhh/crc-debug-8rgfb"]
Feb 02 16:03:12 crc kubenswrapper[4869]: E0202 16:03:12.013221 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" containerName="container-00"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.013438 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" containerName="container-00"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.014103 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb26728-ed2e-4205-b7f5-ca7a98b8c910" containerName="container-00"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.015446 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.066248 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.066325 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.168154 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.168248 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.168358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.196320 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") pod \"crc-debug-8rgfb\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") " pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:12 crc kubenswrapper[4869]: I0202 16:03:12.339056 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.139886 4869 generic.go:334] "Generic (PLEG): container finished" podID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" containerID="ff2ba6291f48fd05032c2b7a4b4afad2ee04b00ae5888a83b68d20169b675016" exitCode=0
Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.140001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" event={"ID":"1c6d8b60-93c1-4b66-b0fb-bda7a3104357","Type":"ContainerDied","Data":"ff2ba6291f48fd05032c2b7a4b4afad2ee04b00ae5888a83b68d20169b675016"}
Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.140234 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" event={"ID":"1c6d8b60-93c1-4b66-b0fb-bda7a3104357","Type":"ContainerStarted","Data":"a54b88a3f704e5e2d9ef6352bbab60b6335c86ce3541c781f3cb44c5119cbd9c"}
Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.187809 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-8rgfb"]
Feb 02 16:03:13 crc kubenswrapper[4869]: I0202 16:03:13.198032 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9szhh/crc-debug-8rgfb"]
Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.259291 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-8rgfb"
Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.313360 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") pod \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") "
Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.313448 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") pod \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\" (UID: \"1c6d8b60-93c1-4b66-b0fb-bda7a3104357\") "
Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.313587 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host" (OuterVolumeSpecName: "host") pod "1c6d8b60-93c1-4b66-b0fb-bda7a3104357" (UID: "1c6d8b60-93c1-4b66-b0fb-bda7a3104357"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.314024 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-host\") on node \"crc\" DevicePath \"\""
Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.323151 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq" (OuterVolumeSpecName: "kube-api-access-g7vxq") pod "1c6d8b60-93c1-4b66-b0fb-bda7a3104357" (UID: "1c6d8b60-93c1-4b66-b0fb-bda7a3104357"). InnerVolumeSpecName "kube-api-access-g7vxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:03:14 crc kubenswrapper[4869]: I0202 16:03:14.415719 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7vxq\" (UniqueName: \"kubernetes.io/projected/1c6d8b60-93c1-4b66-b0fb-bda7a3104357-kube-api-access-g7vxq\") on node \"crc\" DevicePath \"\"" Feb 02 16:03:15 crc kubenswrapper[4869]: I0202 16:03:15.172645 4869 scope.go:117] "RemoveContainer" containerID="ff2ba6291f48fd05032c2b7a4b4afad2ee04b00ae5888a83b68d20169b675016" Feb 02 16:03:15 crc kubenswrapper[4869]: I0202 16:03:15.172674 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/crc-debug-8rgfb" Feb 02 16:03:15 crc kubenswrapper[4869]: I0202 16:03:15.475045 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" path="/var/lib/kubelet/pods/1c6d8b60-93c1-4b66-b0fb-bda7a3104357/volumes" Feb 02 16:03:45 crc kubenswrapper[4869]: I0202 16:03:45.304745 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 16:03:45 crc kubenswrapper[4869]: I0202 16:03:45.305396 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 16:03:46 crc kubenswrapper[4869]: I0202 16:03:46.745651 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77794c6b74-fhtds_bbb63205-2a5c-4177-8b7f-2a141324ba49/barbican-api/0.log" Feb 02 16:03:46 crc kubenswrapper[4869]: I0202 16:03:46.989885 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77794c6b74-fhtds_bbb63205-2a5c-4177-8b7f-2a141324ba49/barbican-api-log/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.001530 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d7f6679db-zbdxv_9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3/barbican-keystone-listener/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.178584 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-675f9657dc-6qw7m_18463ac0-a171-4ae0-9201-8df3d574eb70/barbican-worker/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.242973 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-675f9657dc-6qw7m_18463ac0-a171-4ae0-9201-8df3d574eb70/barbican-worker-log/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.252388 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d7f6679db-zbdxv_9eddd0ab-42d6-4db0-b0db-eeb0259f4ec3/barbican-keystone-listener-log/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.467500 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-fmcw2_5ca847f3-12e0-43a7-af47-6739dc10627d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.523591 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_58069dba-f825-4ee3-972d-85d122369b28/ceilometer-central-agent/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.669075 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58069dba-f825-4ee3-972d-85d122369b28/proxy-httpd/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.673253 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58069dba-f825-4ee3-972d-85d122369b28/ceilometer-notification-agent/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.688664 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58069dba-f825-4ee3-972d-85d122369b28/sg-core/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.871486 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-99d7r_89ab19c1-9bd6-4f8b-b295-aee078ee4b0d/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:47 crc kubenswrapper[4869]: I0202 16:03:47.880324 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-ntbnh_67cb4a99-39e2-4e00-88f5-748ad16cb874/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:48 crc kubenswrapper[4869]: I0202 16:03:48.579996 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ffb18e2a-67e6-4932-97fb-dd57b66f6c93/probe/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.145847 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d8f007a5-a428-44ff-8c6d-5de0d08beb7c/cinder-scheduler/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.189620 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1fbb1ee0-3403-49aa-9e5c-3926dd981751/cinder-api/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.308860 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1fbb1ee0-3403-49aa-9e5c-3926dd981751/cinder-api-log/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.484049 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d8f007a5-a428-44ff-8c6d-5de0d08beb7c/probe/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.702932 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37/probe/0.log" Feb 02 16:03:49 crc kubenswrapper[4869]: I0202 16:03:49.904592 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-txn47_19c443c4-baed-4a61-bc6d-bc8ba528e326/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:50 crc kubenswrapper[4869]: I0202 16:03:50.114577 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z97k7_c94bd387-2568-4bea-a5be-0ff99e224681/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:50 crc kubenswrapper[4869]: I0202 16:03:50.415961 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5kt5g_2d493264-07c6-4809-9a3e-809e60997896/init/0.log" Feb 02 16:03:50 crc kubenswrapper[4869]: I0202 16:03:50.581445 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5kt5g_2d493264-07c6-4809-9a3e-809e60997896/init/0.log" Feb 02 16:03:50 crc kubenswrapper[4869]: I0202 16:03:50.841711 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5kt5g_2d493264-07c6-4809-9a3e-809e60997896/dnsmasq-dns/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.064146 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6439a406-db54-421d-b5c7-5911b35cfda3/glance-log/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.082248 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6439a406-db54-421d-b5c7-5911b35cfda3/glance-httpd/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.333146 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e4f5a226-bdff-4182-971c-e3a22264a7d6/glance-httpd/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.592864 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e4f5a226-bdff-4182-971c-e3a22264a7d6/glance-log/0.log" Feb 02 16:03:51 crc kubenswrapper[4869]: I0202 16:03:51.788829 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6bc7747c5b-j78w2_8714c728-0089-451b-8335-ab32ef8c39ac/horizon/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.008908 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-zd67g_1cfd609a-5580-47a7-bb6d-afc564ca64d4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.218651 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rsvsc_04202cce-c3c1-483c-9d50-0fcf9a398094/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.290508 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6bc7747c5b-j78w2_8714c728-0089-451b-8335-ab32ef8c39ac/horizon-log/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.589196 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500741-9h6gs_d6019cb5-097c-4e32-b08f-dd117d4bcdf7/keystone-cron/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.788529 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500801-n7swm_35e8f12b-8b8b-4309-a57e-e46c357acc6d/keystone-cron/0.log" Feb 02 16:03:52 crc kubenswrapper[4869]: I0202 16:03:52.872658 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ffb18e2a-67e6-4932-97fb-dd57b66f6c93/cinder-backup/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.030183 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c78d1b99-1b30-416f-9afc-3dda8204e757/kube-state-metrics/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.296345 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hhzd9_83c45a4e-9fe0-4d8d-a74d-162a45a36d5e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.452013 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-api-0_68d3a7fe-1a89-4d45-9ffd-8057e313d3e9/manila-api-log/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.526763 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_68d3a7fe-1a89-4d45-9ffd-8057e313d3e9/manila-api/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.539253 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-575599577-dmndq_fc4c6770-5954-4777-8c4f-47397d045008/keystone-api/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.724645 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_52b1f1d7-270e-400d-b273-961b7142f38c/probe/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.796085 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_52b1f1d7-270e-400d-b273-961b7142f38c/manila-scheduler/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.816113 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_0df9e23b-1681-42de-b9d6-87c4c518d082/manila-share/0.log" Feb 02 16:03:53 crc kubenswrapper[4869]: I0202 16:03:53.918699 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_0df9e23b-1681-42de-b9d6-87c4c518d082/probe/0.log" Feb 02 16:03:54 crc kubenswrapper[4869]: I0202 16:03:54.458705 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bbd64cf97-7t5h5_1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca/neutron-httpd/0.log" Feb 02 16:03:54 crc kubenswrapper[4869]: I0202 16:03:54.622985 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bbd64cf97-7t5h5_1d61c8c1-56dc-4fc4-8bbf-630c2fcff4ca/neutron-api/0.log" Feb 02 16:03:54 crc kubenswrapper[4869]: I0202 16:03:54.646666 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cj74g_cece8f41-7b97-43d1-b538-c09300006b15/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:55 crc kubenswrapper[4869]: I0202 16:03:55.356805 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_87abe16e-c4e3-4869-8f9e-6f9b46106c51/nova-cell0-conductor-conductor/0.log" Feb 02 16:03:55 crc kubenswrapper[4869]: I0202 16:03:55.649576 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6f2e77f7-6ccb-4992-8292-e69f277dc8f2/nova-api-log/0.log" Feb 02 16:03:55 crc kubenswrapper[4869]: I0202 16:03:55.892513 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7ed5d945-0024-455d-a2d4-c8724693b402/nova-cell1-conductor-conductor/0.log" Feb 02 16:03:56 crc kubenswrapper[4869]: I0202 16:03:56.202249 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_127a427f-66a5-4d07-ac48-aea0da95d425/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 16:03:56 crc kubenswrapper[4869]: I0202 16:03:56.205553 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6f2e77f7-6ccb-4992-8292-e69f277dc8f2/nova-api-api/0.log" Feb 02 16:03:56 crc kubenswrapper[4869]: I0202 16:03:56.390402 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-fzpnk_196ff3ae-e676-4d40-9de4-ea6ad23a1e5e/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:03:56 crc 
Feb 02 16:03:56 crc kubenswrapper[4869]: I0202 16:03:56.924858 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_46796adc-7f57-405f-bb4c-a2ccb79153f2/nova-scheduler-scheduler/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.091640 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4287f1a9-b523-48a9-a999-fc8f34b212a4/mysql-bootstrap/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.262827 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4287f1a9-b523-48a9-a999-fc8f34b212a4/mysql-bootstrap/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.309121 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4287f1a9-b523-48a9-a999-fc8f34b212a4/galera/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.501551 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0db20771-eb71-4272-9814-ab5bf0fff1fe/mysql-bootstrap/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.725957 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0db20771-eb71-4272-9814-ab5bf0fff1fe/mysql-bootstrap/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.743889 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0db20771-eb71-4272-9814-ab5bf0fff1fe/galera/0.log"
Feb 02 16:03:57 crc kubenswrapper[4869]: I0202 16:03:57.934504 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9c3c55b0-c9be-4635-9562-347406f90dff/openstackclient/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.219018 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-f7z74_d51425d7-d30c-466d-b478-17a637e3ef9f/ovn-controller/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.426246 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sr5dv_2b612893-5e70-472a-a65f-0d0c66f82de3/openstack-network-exporter/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.685931 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bd7dt_79eb9544-e5e9-455c-94ca-bb36fa6eb873/ovsdb-server-init/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.875943 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bd7dt_79eb9544-e5e9-455c-94ca-bb36fa6eb873/ovsdb-server-init/0.log"
Feb 02 16:03:58 crc kubenswrapper[4869]: I0202 16:03:58.906118 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bd7dt_79eb9544-e5e9-455c-94ca-bb36fa6eb873/ovs-vswitchd/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.098810 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bd7dt_79eb9544-e5e9-455c-94ca-bb36fa6eb873/ovsdb-server/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.324317 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xjq2r_72dccf63-f84a-41bb-a601-d67db9557b64/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.391616 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0c133ea7-0c2e-4338-a24b-319409d4e41a/nova-metadata-metadata/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.562608 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f502e55d-56a7-4238-b2cc-46a4c2eb3945/openstack-network-exporter/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.624321 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f502e55d-56a7-4238-b2cc-46a4c2eb3945/ovn-northd/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.779028 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_208fe19b-f03b-4a68-b6f2-f9dc3783239e/openstack-network-exporter/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.805376 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_208fe19b-f03b-4a68-b6f2-f9dc3783239e/ovsdbserver-nb/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.826575 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_e8aaaaf3-a8f6-4aae-bb0f-d4e88ef0fc37/cinder-volume/0.log"
Feb 02 16:03:59 crc kubenswrapper[4869]: I0202 16:03:59.982364 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1078d20a-9d7e-45ef-8be5-bade239489c4/memcached/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.004484 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9a1c388-0473-4284-9a2c-09e3d97858f2/ovsdbserver-sb/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.006029 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9a1c388-0473-4284-9a2c-09e3d97858f2/openstack-network-exporter/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.159935 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-dc5588748-k6f99_ec674145-26a6-4ce9-9e00-083bccdad283/placement-api/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.226561 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebc9110-3186-4c3f-877b-44061d345584/setup-container/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.291636 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-dc5588748-k6f99_ec674145-26a6-4ce9-9e00-083bccdad283/placement-log/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.442758 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebc9110-3186-4c3f-877b-44061d345584/rabbitmq/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.451733 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cebc9110-3186-4c3f-877b-44061d345584/setup-container/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.457636 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d228ac68-eb5f-494a-bf43-6cbca346ae24/setup-container/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.674774 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d228ac68-eb5f-494a-bf43-6cbca346ae24/setup-container/0.log"
Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.719037 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_d228ac68-eb5f-494a-bf43-6cbca346ae24/rabbitmq/0.log"
file" path="/var/log/pods/openstack_rabbitmq-server-0_d228ac68-eb5f-494a-bf43-6cbca346ae24/rabbitmq/0.log" Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.729699 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-t2m97_9ef6ee1c-f8bc-4060-8922-945b20187dfb/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.875599 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-d946d_09ba8528-6790-4df1-92c8-828f0ccd858e/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.925720 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lnnll_4b9e0145-82e1-4dde-a4d2-d17e482d01b7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:04:00 crc kubenswrapper[4869]: I0202 16:04:00.959415 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-v2kr2_3d624d16-2868-4154-a700-18e0cebe9357/ssh-known-hosts-edpm-deployment/0.log" Feb 02 16:04:01 crc kubenswrapper[4869]: I0202 16:04:01.184559 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6a8f8bdb-9052-4ea2-9be8-1b61b5705e7d/test-operator-logs-container/0.log" Feb 02 16:04:01 crc kubenswrapper[4869]: I0202 16:04:01.326510 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-48vgr_34077009-4156-4523-9f51-24147190e39c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 16:04:01 crc kubenswrapper[4869]: I0202 16:04:01.594066 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_1ccbb21f-23d9-48be-a212-547e064326f6/tempest-tests-tempest-tests-runner/0.log" Feb 02 16:04:15 crc kubenswrapper[4869]: I0202 16:04:15.304120 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 16:04:15 crc kubenswrapper[4869]: I0202 16:04:15.305222 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 16:04:22 crc kubenswrapper[4869]: I0202 16:04:22.881454 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/util/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.054173 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/pull/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.071714 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/pull/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: 
I0202 16:04:23.095937 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/util/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.280054 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/extract/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.325584 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/pull/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.347203 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1f5c49be15fd40f839e0ad6075b971575fb4cc3051882700eb2772f89dcqwbn_e74d3905-6954-4c65-9cd2-d44a638ef83f/util/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.537743 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-fc589b45f-28mqn_f605f0c6-e023-433b-8e78-373b32387809/manager/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.689746 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-8f4c5cb64-pbxmj_5ea40597-21e0-4548-ab09-e381dac894ef/manager/0.log" Feb 02 16:04:23 crc kubenswrapper[4869]: I0202 16:04:23.865880 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5d77f4dbc9-qmt77_f07dc950-121d-4a91-8489-dfc187196775/manager/0.log" Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.087685 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-65dc6c8d9c-9ph7x_53467de5-c9d7-4aa0-973d-180c8cb84b27/manager/0.log" Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.190830 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-cpjjt_ad8b0f9a-67d7-4897-af4b-f344b3d1c502/manager/0.log" Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.594587 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-87bd9d46f-762xj_77902d6e-ef76-42b0-a40c-0b51f383f580/manager/0.log" Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.752845 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-b4jxj_c0779518-9e33-43e3-b373-263d74fbbd0f/manager/0.log" Feb 02 16:04:24 crc kubenswrapper[4869]: I0202 16:04:24.885293 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-64469b487f-m9czv_f27a3d01-fbc5-46d9-9c11-ef6c21ead605/manager/0.log" Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.040262 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7775d87d9d-l2b72_993dae41-359f-47f7-9a2a-38f7c97d49de/manager/0.log" Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.116256 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-hpnsb_3b0cf904-7af8-4e57-a664-7e594e557445/manager/0.log" Feb 02 16:04:25 crc 
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.385352 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-576995988b-swhqr_c6218bbb-23fc-4ddd-8143-2ccf9f4cf2eb/manager/0.log"
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.546743 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5644b66645-2chmz_98a25bb6-75b1-49ad-8d7c-cc4e763470ec/manager/0.log"
Feb 02 16:04:25 crc kubenswrapper[4869]: I0202 16:04:25.705072 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dmfpdl_bd94e783-b3ec-4d7e-b669-98255f029da6/manager/0.log"
Feb 02 16:04:26 crc kubenswrapper[4869]: I0202 16:04:26.068656 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5d75b9d66c-jsstz_61702985-b65f-4603-9960-3a455bf05c9e/operator/0.log"
Feb 02 16:04:26 crc kubenswrapper[4869]: I0202 16:04:26.336137 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-g2t6v_39ba26b8-85bb-43c8-80cb-c9523ba9cac7/registry-server/0.log"
Feb 02 16:04:26 crc kubenswrapper[4869]: I0202 16:04:26.630661 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-28zx5_cf357940-5e8d-4111-86e6-1fafd5e670cd/manager/0.log"
Feb 02 16:04:26 crc kubenswrapper[4869]: I0202 16:04:26.878014 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-6vnjh_ac2b0707-5906-40df-9457-06739f19df84/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.093084 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-djzsw_6719d674-1dac-4af1-859b-ea6a2186a20a/operator/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.243837 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7b89fdf75b-zdwh8_98a357a8-0e70-4f30-a41a-8dde25612a8a/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.513710 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-565849b54-fm2kj_7af79025-a32d-4e73-9559-5991093e986a/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.581531 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ntthk_06f5e083-c0ea-4ad0-9a07-50707d84be61/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.761731 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-586b95b788-9fsf5_2dfa14d3-9496-44cb-948b-e4065a9930c8/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.830034 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b89ddb58-h2kl2_7e9b35b2-f20d-4102-b541-63d2822c215d/manager/0.log"
Feb 02 16:04:27 crc kubenswrapper[4869]: I0202 16:04:27.981612 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58566f7c4b-mnxtb_32aa6b38-d480-426c-a36c-4cf34c082e73/manager/0.log"
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.303876 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.304410 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.304468 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.305313 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 16:04:45 crc kubenswrapper[4869]: I0202 16:04:45.305360 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89" gracePeriod=600
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.029869 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-l692p_f89cdf2d-50e4-4089-8345-f11f7791826d/control-plane-machine-set-operator/0.log"
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.031056 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89" exitCode=0
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.031090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89"}
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.031113 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"}
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.031134 4869 scope.go:117] "RemoveContainer" containerID="3558d4becb7e91ddafcf881976d2e5862a941c6be1f0e7c360f4b22efbe53715"
Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.201846 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whptb_0ade6e3e-6274-4469-af6f-39455fd721db/kube-rbac-proxy/0.log"
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whptb_0ade6e3e-6274-4469-af6f-39455fd721db/kube-rbac-proxy/0.log" Feb 02 16:04:46 crc kubenswrapper[4869]: I0202 16:04:46.215524 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whptb_0ade6e3e-6274-4469-af6f-39455fd721db/machine-api-operator/0.log" Feb 02 16:04:58 crc kubenswrapper[4869]: I0202 16:04:58.464232 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7j57w_d96c83c3-8f98-40c8-85f8-37cdf10eaeb7/cert-manager-controller/0.log" Feb 02 16:04:58 crc kubenswrapper[4869]: I0202 16:04:58.648394 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-498mc_92227558-4fbe-40b7-8a51-f9ba7043125a/cert-manager-cainjector/0.log" Feb 02 16:04:58 crc kubenswrapper[4869]: I0202 16:04:58.739979 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dfqjm_804bb5fc-4d8e-4f9f-892b-6d9af2943dbd/cert-manager-webhook/0.log" Feb 02 16:05:10 crc kubenswrapper[4869]: I0202 16:05:10.978499 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-sk72x_60ca7e15-9af2-4019-9481-39f8bc9e4ec7/nmstate-console-plugin/0.log" Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.159298 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-87g86_3d92c75a-462e-4ff9-8373-8d91fb2624f4/nmstate-handler/0.log" Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.224632 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-647lw_ec9ec105-2660-4787-89f3-5c0fe79e8e97/kube-rbac-proxy/0.log" Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.299048 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-647lw_ec9ec105-2660-4787-89f3-5c0fe79e8e97/nmstate-metrics/0.log" Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.363239 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-bbvzg_f417537d-ce1d-461c-afec-09d3ec96c3b4/nmstate-operator/0.log" Feb 02 16:05:11 crc kubenswrapper[4869]: I0202 16:05:11.476072 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jf287_bd339f13-8405-47aa-b76a-2cef40d3ec11/nmstate-webhook/0.log" Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.416011 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-45hcg_fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188/kube-rbac-proxy/0.log" Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.601008 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-45hcg_fb7d0f1f-ea38-4756-b1fa-5fba1cc1a188/controller/0.log" Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.671378 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-frr-files/0.log" Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.850939 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-frr-files/0.log" Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.869183 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-reloader/0.log" Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.899053 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-reloader/0.log" Feb 02 16:05:37 crc kubenswrapper[4869]: I0202 16:05:37.926816 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-metrics/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.130805 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-frr-files/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.180928 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-reloader/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.185634 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-metrics/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.207236 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-metrics/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.333286 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-frr-files/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.371717 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-reloader/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.411178 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/controller/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.420578 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/cp-metrics/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.572955 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/frr-metrics/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.660318 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/kube-rbac-proxy/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.715422 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/kube-rbac-proxy-frr/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.797331 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/reloader/0.log" Feb 02 16:05:38 crc kubenswrapper[4869]: I0202 16:05:38.940325 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-2v777_d389ca1e-a7e0-4a90-ae8a-f4d760b1ab1c/frr-k8s-webhook-server/0.log" Feb 02 16:05:39 crc kubenswrapper[4869]: I0202 16:05:39.177109 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b74bd8485-6rx7p_7a0708ec-3eb5-4515-adf0-e36c732da54e/manager/0.log" Feb 02 16:05:39 crc kubenswrapper[4869]: I0202 16:05:39.341284 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69b678c656-9prhr_322f75dd-f952-451d-b505-400b173b382c/webhook-server/0.log" Feb 02 16:05:39 crc kubenswrapper[4869]: I0202 16:05:39.489376 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qkkx4_131f6807-e412-436c-8271-86f09259ae74/kube-rbac-proxy/0.log" Feb 02 16:05:40 crc kubenswrapper[4869]: I0202 16:05:40.059508 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qkkx4_131f6807-e412-436c-8271-86f09259ae74/speaker/0.log" Feb 02 16:05:40 crc kubenswrapper[4869]: I0202 16:05:40.232648 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jrfvv_4c02ed66-22a0-4bd3-b10b-8dbf872aac9d/frr/0.log" Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.309859 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/util/0.log" Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.459578 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/util/0.log" Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.468869 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/pull/0.log" Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.498491 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/pull/0.log" Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.675006 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/pull/0.log" Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.701317 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/util/0.log" Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.709393 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcm2fbx_861ed901-c46c-49d9-83ad-aeca9fd3f93b/extract/0.log" Feb 02 16:05:52 crc kubenswrapper[4869]: I0202 16:05:52.860542 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/util/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.021865 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/util/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.035769 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/pull/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.036252 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/pull/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.223957 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/util/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.225367 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/pull/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.282472 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qkxp4_264a08a0-30f5-4b76-af09-b97629a44d89/extract/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.414663 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-utilities/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.612857 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-content/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.617631 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-utilities/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.672825 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-content/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.777709 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-content/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.785968 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/extract-utilities/0.log" Feb 02 16:05:53 crc kubenswrapper[4869]: I0202 16:05:53.999477 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-utilities/0.log" Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.206707 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-utilities/0.log" Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.251271 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-content/0.log" Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.294375 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-content/0.log" Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.442839 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-utilities/0.log" Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.474966 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/extract-content/0.log" Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.522990 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xjh6d_5e1c62bb-e047-4367-9cd0-572ac75fd6f6/registry-server/0.log" Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.641239 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nbjts_ac6a4d49-eb04-4ee1-be26-63f67b0a092a/marketplace-operator/0.log" Feb 02 16:05:54 crc kubenswrapper[4869]: I0202 16:05:54.852773 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-utilities/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.109306 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-content/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.137990 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-content/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.159565 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-utilities/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.295145 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7q5gz_395af9bf-292b-41d1-a4ad-e4983331bc2d/registry-server/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.335543 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-content/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.336002 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/extract-utilities/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.565344 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hh8gt_59d9a56c-d3b3-438c-8047-097cb18004b1/registry-server/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.590140 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-utilities/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.733530 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-content/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.761058 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-utilities/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.800845 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-content/0.log" Feb 02 16:05:55 crc kubenswrapper[4869]: I0202 16:05:55.977573 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-utilities/0.log" Feb 02 16:05:56 crc kubenswrapper[4869]: I0202 16:05:56.020449 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/extract-content/0.log" Feb 02 16:05:56 crc kubenswrapper[4869]: I0202 16:05:56.714212 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ndh2z_13714902-1992-4167-97b5-f3465ce5038f/registry-server/0.log" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.045389 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rzx97"] Feb 02 16:06:32 crc kubenswrapper[4869]: E0202 16:06:32.046373 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" containerName="container-00" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.046390 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" containerName="container-00" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.046623 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6d8b60-93c1-4b66-b0fb-bda7a3104357" containerName="container-00" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.048267 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.060966 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzx97"] Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.165251 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.165395 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.165425 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.267225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.267283 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.267759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.267870 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.268182 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.289180 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") pod \"community-operators-rzx97\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") " pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:32 crc kubenswrapper[4869]: I0202 16:06:32.400986 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.110719 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rzx97"] Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.974469 4869 generic.go:334] "Generic (PLEG): container finished" podID="77029322-bdbc-422f-8f29-8294fb8c1921" containerID="bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe" exitCode=0 Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.974755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerDied","Data":"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"} Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.974786 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerStarted","Data":"7011d1d6eb35ac243cd911101dc03147167be19d4f9372fce27404d829dfb15d"} Feb 02 16:06:33 crc kubenswrapper[4869]: I0202 16:06:33.978051 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 16:06:35 crc kubenswrapper[4869]: I0202 16:06:35.994510 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerStarted","Data":"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"} Feb 02 16:06:37 crc kubenswrapper[4869]: I0202 16:06:37.004115 4869 generic.go:334] "Generic (PLEG): container finished" podID="77029322-bdbc-422f-8f29-8294fb8c1921" containerID="448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3" exitCode=0 Feb 02 16:06:37 crc kubenswrapper[4869]: I0202 16:06:37.004215 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerDied","Data":"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"} Feb 02 16:06:39 crc kubenswrapper[4869]: I0202 16:06:39.034026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerStarted","Data":"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"} Feb 02 16:06:39 crc kubenswrapper[4869]: I0202 16:06:39.056018 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rzx97" podStartSLOduration=3.271606434 podStartE2EDuration="7.055998134s" podCreationTimestamp="2026-02-02 16:06:32 +0000 UTC" firstStartedPulling="2026-02-02 16:06:33.977762483 +0000 UTC m=+5595.622399253" lastFinishedPulling="2026-02-02 16:06:37.762154193 +0000 UTC m=+5599.406790953" observedRunningTime="2026-02-02 16:06:39.051754521 +0000 UTC m=+5600.696391291" watchObservedRunningTime="2026-02-02 
16:06:39.055998134 +0000 UTC m=+5600.700634904" Feb 02 16:06:42 crc kubenswrapper[4869]: I0202 16:06:42.401799 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:42 crc kubenswrapper[4869]: I0202 16:06:42.402227 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:42 crc kubenswrapper[4869]: I0202 16:06:42.472606 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:43 crc kubenswrapper[4869]: I0202 16:06:43.121063 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rzx97" Feb 02 16:06:43 crc kubenswrapper[4869]: I0202 16:06:43.172113 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rzx97"] Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.092653 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rzx97" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="registry-server" containerID="cri-o://9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c" gracePeriod=2 Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.304283 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.304350 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.564431 4869 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.686125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") pod \"77029322-bdbc-422f-8f29-8294fb8c1921\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") "
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.686190 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") pod \"77029322-bdbc-422f-8f29-8294fb8c1921\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") "
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.686268 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") pod \"77029322-bdbc-422f-8f29-8294fb8c1921\" (UID: \"77029322-bdbc-422f-8f29-8294fb8c1921\") "
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.699022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities" (OuterVolumeSpecName: "utilities") pod "77029322-bdbc-422f-8f29-8294fb8c1921" (UID: "77029322-bdbc-422f-8f29-8294fb8c1921"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.708504 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd" (OuterVolumeSpecName: "kube-api-access-dm5wd") pod "77029322-bdbc-422f-8f29-8294fb8c1921" (UID: "77029322-bdbc-422f-8f29-8294fb8c1921"). InnerVolumeSpecName "kube-api-access-dm5wd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.759114 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77029322-bdbc-422f-8f29-8294fb8c1921" (UID: "77029322-bdbc-422f-8f29-8294fb8c1921"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.788258 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.788308 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77029322-bdbc-422f-8f29-8294fb8c1921-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 16:06:45 crc kubenswrapper[4869]: I0202 16:06:45.788320 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm5wd\" (UniqueName: \"kubernetes.io/projected/77029322-bdbc-422f-8f29-8294fb8c1921-kube-api-access-dm5wd\") on node \"crc\" DevicePath \"\""
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106184 4869 generic.go:334] "Generic (PLEG): container finished" podID="77029322-bdbc-422f-8f29-8294fb8c1921" containerID="9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c" exitCode=0
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerDied","Data":"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"}
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rzx97" event={"ID":"77029322-bdbc-422f-8f29-8294fb8c1921","Type":"ContainerDied","Data":"7011d1d6eb35ac243cd911101dc03147167be19d4f9372fce27404d829dfb15d"}
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106270 4869 scope.go:117] "RemoveContainer" containerID="9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.106390 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rzx97"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.132923 4869 scope.go:117] "RemoveContainer" containerID="448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.151439 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rzx97"]
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.157508 4869 scope.go:117] "RemoveContainer" containerID="bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.161056 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rzx97"]
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.200774 4869 scope.go:117] "RemoveContainer" containerID="9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"
Feb 02 16:06:46 crc kubenswrapper[4869]: E0202 16:06:46.201427 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c\": container with ID starting with 9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c not found: ID does not exist" containerID="9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.201487 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c"} err="failed to get container status \"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c\": rpc error: code = NotFound desc = could not find container \"9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c\": container with ID starting with 9d34179545bae58863d61736cfbadac065c1e996d7180b01e2bdddaf7fe2a05c not found: ID does not exist"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.201520 4869 scope.go:117] "RemoveContainer" containerID="448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"
Feb 02 16:06:46 crc kubenswrapper[4869]: E0202 16:06:46.202068 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3\": container with ID starting with 448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3 not found: ID does not exist" containerID="448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.202179 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3"} err="failed to get container status \"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3\": rpc error: code = NotFound desc = could not find container \"448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3\": container with ID starting with 448f5aa4594261085290d99582b3b4d30a03e1a9bd202ee3f72ec9ebb067c5b3 not found: ID does not exist"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.202266 4869 scope.go:117] "RemoveContainer" containerID="bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"
Feb 02 16:06:46 crc kubenswrapper[4869]: E0202 16:06:46.202749 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe\": container with ID starting with bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe not found: ID does not exist" containerID="bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"
Feb 02 16:06:46 crc kubenswrapper[4869]: I0202 16:06:46.202784 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe"} err="failed to get container status \"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe\": rpc error: code = NotFound desc = could not find container \"bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe\": container with ID starting with bf50d370fd69a062eb76bebb8b979b52207e7900355fa9049b6241e97506c5fe not found: ID does not exist"
Feb 02 16:06:47 crc kubenswrapper[4869]: I0202 16:06:47.472704 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" path="/var/lib/kubelet/pods/77029322-bdbc-422f-8f29-8294fb8c1921/volumes"
Feb 02 16:07:15 crc kubenswrapper[4869]: I0202 16:07:15.304729 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 16:07:15 crc kubenswrapper[4869]: I0202 16:07:15.305510 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.304017 4869 patch_prober.go:28] interesting pod/machine-config-daemon-dql2j container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.304670 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.304723 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dql2j"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.305891 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"} pod="openshift-machine-config-operator/machine-config-daemon-dql2j" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.305984 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" gracePeriod=600
podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerName="machine-config-daemon" containerID="cri-o://f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" gracePeriod=600 Feb 02 16:07:45 crc kubenswrapper[4869]: E0202 16:07:45.429779 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.707308 4869 generic.go:334] "Generic (PLEG): container finished" podID="a649255d-23ef-4070-9acc-2adb7d94bc21" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" exitCode=0 Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.707354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerDied","Data":"f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79"} Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.707411 4869 scope.go:117] "RemoveContainer" containerID="67a00da498baf4c52d8ec517c2f640db3de771b80196be5b7d7ee42267f2fa89" Feb 02 16:07:45 crc kubenswrapper[4869]: I0202 16:07:45.708016 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:07:45 crc kubenswrapper[4869]: E0202 16:07:45.709413 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:07:59 crc kubenswrapper[4869]: I0202 16:07:59.473087 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:07:59 crc kubenswrapper[4869]: E0202 16:07:59.473795 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:08:04 crc kubenswrapper[4869]: I0202 16:08:04.898738 4869 generic.go:334] "Generic (PLEG): container finished" podID="56e87714-4847-4c2f-81a9-357123c1e872" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" exitCode=0 Feb 02 16:08:04 crc kubenswrapper[4869]: I0202 16:08:04.898801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9szhh/must-gather-wq69k" event={"ID":"56e87714-4847-4c2f-81a9-357123c1e872","Type":"ContainerDied","Data":"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2"} Feb 02 16:08:04 crc kubenswrapper[4869]: I0202 16:08:04.900190 4869 scope.go:117] "RemoveContainer" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" Feb 02 
16:08:05 crc kubenswrapper[4869]: I0202 16:08:05.762256 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9szhh_must-gather-wq69k_56e87714-4847-4c2f-81a9-357123c1e872/gather/0.log" Feb 02 16:08:13 crc kubenswrapper[4869]: I0202 16:08:13.462950 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:08:13 crc kubenswrapper[4869]: E0202 16:08:13.463829 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.114449 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.114743 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9szhh/must-gather-wq69k" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="copy" containerID="cri-o://db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" gracePeriod=2 Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.127666 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9szhh/must-gather-wq69k"] Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.597335 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9szhh_must-gather-wq69k_56e87714-4847-4c2f-81a9-357123c1e872/copy/0.log" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.598739 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.732519 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") pod \"56e87714-4847-4c2f-81a9-357123c1e872\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.732596 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") pod \"56e87714-4847-4c2f-81a9-357123c1e872\" (UID: \"56e87714-4847-4c2f-81a9-357123c1e872\") " Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.753121 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s" (OuterVolumeSpecName: "kube-api-access-2pk5s") pod "56e87714-4847-4c2f-81a9-357123c1e872" (UID: "56e87714-4847-4c2f-81a9-357123c1e872"). InnerVolumeSpecName "kube-api-access-2pk5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.834766 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pk5s\" (UniqueName: \"kubernetes.io/projected/56e87714-4847-4c2f-81a9-357123c1e872-kube-api-access-2pk5s\") on node \"crc\" DevicePath \"\"" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.903059 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "56e87714-4847-4c2f-81a9-357123c1e872" (UID: "56e87714-4847-4c2f-81a9-357123c1e872"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.936422 4869 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56e87714-4847-4c2f-81a9-357123c1e872-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.995407 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9szhh_must-gather-wq69k_56e87714-4847-4c2f-81a9-357123c1e872/copy/0.log" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.995854 4869 generic.go:334] "Generic (PLEG): container finished" podID="56e87714-4847-4c2f-81a9-357123c1e872" containerID="db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" exitCode=143 Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.995960 4869 scope.go:117] "RemoveContainer" containerID="db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" Feb 02 16:08:14 crc kubenswrapper[4869]: I0202 16:08:14.996187 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9szhh/must-gather-wq69k" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.022882 4869 scope.go:117] "RemoveContainer" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.109505 4869 scope.go:117] "RemoveContainer" containerID="db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" Feb 02 16:08:15 crc kubenswrapper[4869]: E0202 16:08:15.110098 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1\": container with ID starting with db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1 not found: ID does not exist" containerID="db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.110161 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1"} err="failed to get container status \"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1\": rpc error: code = NotFound desc = could not find container \"db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1\": container with ID starting with db58916d1bcfc21107201fea54ae01302b7370dca3d3b2095ca5b15f797c08f1 not found: ID does not exist" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.110198 4869 scope.go:117] "RemoveContainer" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" Feb 02 16:08:15 crc kubenswrapper[4869]: E0202 16:08:15.110522 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2\": container with ID starting with f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2 not found: ID does not exist" containerID="f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.110548 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2"} err="failed to get container status \"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2\": rpc error: code = NotFound desc = could not find container \"f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2\": container with ID starting with f771de653c981b731ce670ef0967f6346d907dea4af8ab7c2764907bd537b2f2 not found: ID does not exist" Feb 02 16:08:15 crc kubenswrapper[4869]: I0202 16:08:15.546048 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e87714-4847-4c2f-81a9-357123c1e872" path="/var/lib/kubelet/pods/56e87714-4847-4c2f-81a9-357123c1e872/volumes" Feb 02 16:08:25 crc kubenswrapper[4869]: I0202 16:08:25.463164 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:08:25 crc kubenswrapper[4869]: E0202 16:08:25.464206 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:08:32 crc kubenswrapper[4869]: I0202 16:08:32.518569 4869 scope.go:117] "RemoveContainer" containerID="d472ad4cfffb6ce34fcab232f456faf2bc5c139884bc19851d79c2adff55a49f" Feb 02 16:08:40 crc kubenswrapper[4869]: I0202 16:08:40.462327 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:08:40 crc kubenswrapper[4869]: E0202 16:08:40.464155 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:08:55 crc kubenswrapper[4869]: I0202 16:08:55.462881 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:08:55 crc kubenswrapper[4869]: E0202 16:08:55.463715 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:09:09 crc kubenswrapper[4869]: I0202 16:09:09.468671 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:09:09 crc kubenswrapper[4869]: E0202 16:09:09.469733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:09:21 crc kubenswrapper[4869]: I0202 16:09:21.462583 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:09:21 crc kubenswrapper[4869]: E0202 16:09:21.463483 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:09:32 crc kubenswrapper[4869]: I0202 16:09:32.462220 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:09:32 crc kubenswrapper[4869]: E0202 16:09:32.464789 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:09:32 crc kubenswrapper[4869]: I0202 16:09:32.582666 4869 scope.go:117] "RemoveContainer" containerID="77f7b5d294b60bfbbe355f8b5327d53d20b5718b7bb4f2b6f233a898b734eaf7" Feb 02 16:09:46 crc kubenswrapper[4869]: I0202 16:09:46.462866 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:09:46 crc kubenswrapper[4869]: E0202 16:09:46.464159 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:01 crc kubenswrapper[4869]: I0202 16:10:01.463361 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:10:01 crc kubenswrapper[4869]: E0202 16:10:01.464241 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:14 crc kubenswrapper[4869]: I0202 16:10:14.462954 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:10:14 crc kubenswrapper[4869]: E0202 16:10:14.464056 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:26 crc kubenswrapper[4869]: I0202 16:10:26.463443 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:10:26 crc kubenswrapper[4869]: E0202 16:10:26.465739 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.804090 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805128 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="registry-server" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805147 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="registry-server" Feb 02 
16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805171 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="gather" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805180 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="gather" Feb 02 16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805201 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="extract-utilities" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805211 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="extract-utilities" Feb 02 16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805232 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="copy" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805283 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="copy" Feb 02 16:10:29 crc kubenswrapper[4869]: E0202 16:10:29.805306 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="extract-content" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="extract-content" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805809 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="77029322-bdbc-422f-8f29-8294fb8c1921" containerName="registry-server" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805837 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="gather" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.805853 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e87714-4847-4c2f-81a9-357123c1e872" containerName="copy" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.808080 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.822231 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.924103 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.924169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:29 crc kubenswrapper[4869]: I0202 16:10:29.924364 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.027570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.027642 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.027762 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.028279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.028398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.054221 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") pod \"redhat-operators-8qk85\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.142671 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:30 crc kubenswrapper[4869]: I0202 16:10:30.653257 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:31 crc kubenswrapper[4869]: I0202 16:10:31.276077 4869 generic.go:334] "Generic (PLEG): container finished" podID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerID="232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673" exitCode=0 Feb 02 16:10:31 crc kubenswrapper[4869]: I0202 16:10:31.276191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerDied","Data":"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673"} Feb 02 16:10:31 crc kubenswrapper[4869]: I0202 16:10:31.276324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerStarted","Data":"52ae42d34a9f366250b3a49bfcf92a731d2e83c5ababadba7f489e0906888585"} Feb 02 16:10:33 crc kubenswrapper[4869]: I0202 16:10:33.293620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerStarted","Data":"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818"} Feb 02 16:10:36 crc kubenswrapper[4869]: I0202 16:10:36.326388 4869 generic.go:334] "Generic (PLEG): container finished" podID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerID="578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818" exitCode=0 Feb 02 16:10:36 crc kubenswrapper[4869]: I0202 16:10:36.326459 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerDied","Data":"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818"} Feb 02 16:10:37 crc kubenswrapper[4869]: I0202 16:10:37.344003 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerStarted","Data":"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae"} Feb 02 16:10:37 crc kubenswrapper[4869]: I0202 16:10:37.377115 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8qk85" podStartSLOduration=2.770197643 podStartE2EDuration="8.377085565s" podCreationTimestamp="2026-02-02 16:10:29 +0000 UTC" firstStartedPulling="2026-02-02 16:10:31.278120065 +0000 UTC m=+5832.922756835" lastFinishedPulling="2026-02-02 16:10:36.885007987 +0000 UTC m=+5838.529644757" observedRunningTime="2026-02-02 16:10:37.373200882 +0000 UTC m=+5839.017837662" watchObservedRunningTime="2026-02-02 16:10:37.377085565 +0000 UTC m=+5839.021722345" Feb 02 16:10:37 crc kubenswrapper[4869]: I0202 16:10:37.462983 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 
16:10:37 crc kubenswrapper[4869]: E0202 16:10:37.463313 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:40 crc kubenswrapper[4869]: I0202 16:10:40.143727 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:40 crc kubenswrapper[4869]: I0202 16:10:40.144425 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:41 crc kubenswrapper[4869]: I0202 16:10:41.189602 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8qk85" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" probeResult="failure" output=< Feb 02 16:10:41 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Feb 02 16:10:41 crc kubenswrapper[4869]: > Feb 02 16:10:50 crc kubenswrapper[4869]: I0202 16:10:50.212754 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:50 crc kubenswrapper[4869]: I0202 16:10:50.291498 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:50 crc kubenswrapper[4869]: I0202 16:10:50.457279 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:50 crc kubenswrapper[4869]: I0202 16:10:50.462674 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:10:50 crc kubenswrapper[4869]: E0202 16:10:50.462987 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:10:51 crc kubenswrapper[4869]: I0202 16:10:51.468287 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8qk85" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" containerID="cri-o://c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" gracePeriod=2 Feb 02 16:10:51 crc kubenswrapper[4869]: I0202 16:10:51.962729 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.113942 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") pod \"21cffe4b-d876-432a-9dd0-8e04c59313fa\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.114098 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") pod \"21cffe4b-d876-432a-9dd0-8e04c59313fa\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.114291 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") pod \"21cffe4b-d876-432a-9dd0-8e04c59313fa\" (UID: \"21cffe4b-d876-432a-9dd0-8e04c59313fa\") " Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.115069 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities" (OuterVolumeSpecName: "utilities") pod "21cffe4b-d876-432a-9dd0-8e04c59313fa" (UID: "21cffe4b-d876-432a-9dd0-8e04c59313fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.121463 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv" (OuterVolumeSpecName: "kube-api-access-wrngv") pod "21cffe4b-d876-432a-9dd0-8e04c59313fa" (UID: "21cffe4b-d876-432a-9dd0-8e04c59313fa"). InnerVolumeSpecName "kube-api-access-wrngv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.217193 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrngv\" (UniqueName: \"kubernetes.io/projected/21cffe4b-d876-432a-9dd0-8e04c59313fa-kube-api-access-wrngv\") on node \"crc\" DevicePath \"\"" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.217228 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.234516 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21cffe4b-d876-432a-9dd0-8e04c59313fa" (UID: "21cffe4b-d876-432a-9dd0-8e04c59313fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.319449 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21cffe4b-d876-432a-9dd0-8e04c59313fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476780 4869 generic.go:334] "Generic (PLEG): container finished" podID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerID="c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" exitCode=0 Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerDied","Data":"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae"} Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476865 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qk85" event={"ID":"21cffe4b-d876-432a-9dd0-8e04c59313fa","Type":"ContainerDied","Data":"52ae42d34a9f366250b3a49bfcf92a731d2e83c5ababadba7f489e0906888585"} Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476888 4869 scope.go:117] "RemoveContainer" containerID="c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.476903 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qk85" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.497973 4869 scope.go:117] "RemoveContainer" containerID="578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.524538 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.531259 4869 scope.go:117] "RemoveContainer" containerID="232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.539542 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8qk85"] Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.571387 4869 scope.go:117] "RemoveContainer" containerID="c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" Feb 02 16:10:52 crc kubenswrapper[4869]: E0202 16:10:52.571960 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae\": container with ID starting with c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae not found: ID does not exist" containerID="c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.572011 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae"} err="failed to get container status \"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae\": rpc error: code = NotFound desc = could not find container \"c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae\": container with ID starting with c23eb5c0a4421910a6da2eceb91a06ba3c7480a6924efc634e4045f6bf4118ae not found: ID does not exist" Feb 02 16:10:52 crc 
kubenswrapper[4869]: I0202 16:10:52.572042 4869 scope.go:117] "RemoveContainer" containerID="578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818" Feb 02 16:10:52 crc kubenswrapper[4869]: E0202 16:10:52.572536 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818\": container with ID starting with 578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818 not found: ID does not exist" containerID="578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.572597 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818"} err="failed to get container status \"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818\": rpc error: code = NotFound desc = could not find container \"578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818\": container with ID starting with 578aa645737b1037035e1b451e9ebb05bfa592c7f3e937b0f6aa3dfe6dcb7818 not found: ID does not exist" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.572638 4869 scope.go:117] "RemoveContainer" containerID="232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673" Feb 02 16:10:52 crc kubenswrapper[4869]: E0202 16:10:52.572985 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673\": container with ID starting with 232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673 not found: ID does not exist" containerID="232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673" Feb 02 16:10:52 crc kubenswrapper[4869]: I0202 16:10:52.573027 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673"} err="failed to get container status \"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673\": rpc error: code = NotFound desc = could not find container \"232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673\": container with ID starting with 232855ab6c4f4a2b802af174d512f2fef525c91529194e259b95d227f7000673 not found: ID does not exist" Feb 02 16:10:53 crc kubenswrapper[4869]: I0202 16:10:53.476844 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" path="/var/lib/kubelet/pods/21cffe4b-d876-432a-9dd0-8e04c59313fa/volumes" Feb 02 16:11:01 crc kubenswrapper[4869]: I0202 16:11:01.462809 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:11:01 crc kubenswrapper[4869]: E0202 16:11:01.463544 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:11:12 crc kubenswrapper[4869]: I0202 16:11:12.463291 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" 
Feb 02 16:11:12 crc kubenswrapper[4869]: E0202 16:11:12.464614 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:11:27 crc kubenswrapper[4869]: I0202 16:11:27.463668 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:11:27 crc kubenswrapper[4869]: E0202 16:11:27.466686 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:11:41 crc kubenswrapper[4869]: I0202 16:11:41.462588 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:11:41 crc kubenswrapper[4869]: E0202 16:11:41.467331 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:11:53 crc kubenswrapper[4869]: I0202 16:11:53.463717 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:11:53 crc kubenswrapper[4869]: E0202 16:11:53.464558 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:05 crc kubenswrapper[4869]: I0202 16:12:05.462560 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:12:05 crc kubenswrapper[4869]: E0202 16:12:05.463377 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.248244 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:11 crc kubenswrapper[4869]: E0202 16:12:11.249165 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" 
containerName="extract-utilities" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.249179 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="extract-utilities" Feb 02 16:12:11 crc kubenswrapper[4869]: E0202 16:12:11.249214 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.249220 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" Feb 02 16:12:11 crc kubenswrapper[4869]: E0202 16:12:11.249235 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="extract-content" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.249241 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="extract-content" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.249431 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="21cffe4b-d876-432a-9dd0-8e04c59313fa" containerName="registry-server" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.250749 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.261254 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.351376 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.351427 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.351571 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.453211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.453262 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") pod \"redhat-marketplace-4tcmf\" (UID: 
\"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.453320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.454274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.454319 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.486990 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") pod \"redhat-marketplace-4tcmf\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:11 crc kubenswrapper[4869]: I0202 16:12:11.570115 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:12 crc kubenswrapper[4869]: I0202 16:12:12.103439 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:12 crc kubenswrapper[4869]: I0202 16:12:12.206869 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerStarted","Data":"787a9782d45630680398671ceee03bba74f3c66b11586d0b0ab523efcf431b8c"} Feb 02 16:12:13 crc kubenswrapper[4869]: I0202 16:12:13.226207 4869 generic.go:334] "Generic (PLEG): container finished" podID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" containerID="a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694" exitCode=0 Feb 02 16:12:13 crc kubenswrapper[4869]: I0202 16:12:13.226522 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerDied","Data":"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694"} Feb 02 16:12:13 crc kubenswrapper[4869]: I0202 16:12:13.231828 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 16:12:14 crc kubenswrapper[4869]: I0202 16:12:14.236051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerStarted","Data":"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"} Feb 02 16:12:15 crc kubenswrapper[4869]: I0202 16:12:15.245630 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" containerID="c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae" exitCode=0 Feb 02 16:12:15 crc kubenswrapper[4869]: I0202 16:12:15.245727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerDied","Data":"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"} Feb 02 16:12:16 crc kubenswrapper[4869]: I0202 16:12:16.254859 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerStarted","Data":"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"} Feb 02 16:12:16 crc kubenswrapper[4869]: I0202 16:12:16.279020 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4tcmf" podStartSLOduration=2.871680299 podStartE2EDuration="5.279003186s" podCreationTimestamp="2026-02-02 16:12:11 +0000 UTC" firstStartedPulling="2026-02-02 16:12:13.231421068 +0000 UTC m=+5934.876057838" lastFinishedPulling="2026-02-02 16:12:15.638743955 +0000 UTC m=+5937.283380725" observedRunningTime="2026-02-02 16:12:16.275084712 +0000 UTC m=+5937.919721482" watchObservedRunningTime="2026-02-02 16:12:16.279003186 +0000 UTC m=+5937.923639956" Feb 02 16:12:17 crc kubenswrapper[4869]: I0202 16:12:17.463136 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:12:17 crc kubenswrapper[4869]: E0202 16:12:17.463869 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:21 crc kubenswrapper[4869]: I0202 16:12:21.572307 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:21 crc kubenswrapper[4869]: I0202 16:12:21.572674 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:21 crc kubenswrapper[4869]: I0202 16:12:21.620528 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:22 crc kubenswrapper[4869]: I0202 16:12:22.361619 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:22 crc kubenswrapper[4869]: I0202 16:12:22.410961 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.321334 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4tcmf" podUID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" containerName="registry-server" containerID="cri-o://ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6" gracePeriod=2 Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.808351 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.953082 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") pod \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.963326 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") pod \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.963385 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") pod \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\" (UID: \"836c110e-4a7e-4cb2-b896-3c8adc5bff81\") " Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.964804 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities" (OuterVolumeSpecName: "utilities") pod "836c110e-4a7e-4cb2-b896-3c8adc5bff81" (UID: "836c110e-4a7e-4cb2-b896-3c8adc5bff81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:12:24 crc kubenswrapper[4869]: I0202 16:12:24.969204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv" (OuterVolumeSpecName: "kube-api-access-rmbzv") pod "836c110e-4a7e-4cb2-b896-3c8adc5bff81" (UID: "836c110e-4a7e-4cb2-b896-3c8adc5bff81"). InnerVolumeSpecName "kube-api-access-rmbzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.064767 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.064795 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmbzv\" (UniqueName: \"kubernetes.io/projected/836c110e-4a7e-4cb2-b896-3c8adc5bff81-kube-api-access-rmbzv\") on node \"crc\" DevicePath \"\"" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.271132 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "836c110e-4a7e-4cb2-b896-3c8adc5bff81" (UID: "836c110e-4a7e-4cb2-b896-3c8adc5bff81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.330816 4869 generic.go:334] "Generic (PLEG): container finished" podID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" containerID="ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6" exitCode=0 Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.330876 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tcmf" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.331642 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerDied","Data":"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"} Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.331771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tcmf" event={"ID":"836c110e-4a7e-4cb2-b896-3c8adc5bff81","Type":"ContainerDied","Data":"787a9782d45630680398671ceee03bba74f3c66b11586d0b0ab523efcf431b8c"} Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.331852 4869 scope.go:117] "RemoveContainer" containerID="ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.358598 4869 scope.go:117] "RemoveContainer" containerID="c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.369812 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/836c110e-4a7e-4cb2-b896-3c8adc5bff81-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.381870 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.390106 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tcmf"] Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.396555 4869 scope.go:117] "RemoveContainer" containerID="a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.444444 4869 scope.go:117] "RemoveContainer" containerID="ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6" Feb 02 16:12:25 crc kubenswrapper[4869]: E0202 16:12:25.444969 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6\": container with ID starting with ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6 not found: ID does not exist" containerID="ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445013 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6"} err="failed to get container status \"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6\": rpc error: code = NotFound desc = could not find container \"ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6\": container with ID starting with ac583b670cb86a90f15fe1194898acb4e454a1d17e539727ffcfdae4c7b08bb6 not found: ID does not exist" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445040 4869 scope.go:117] "RemoveContainer" containerID="c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae" Feb 02 16:12:25 crc kubenswrapper[4869]: E0202 16:12:25.445457 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae\": container 
with ID starting with c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae not found: ID does not exist" containerID="c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445491 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae"} err="failed to get container status \"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae\": rpc error: code = NotFound desc = could not find container \"c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae\": container with ID starting with c353914f8b51e8ba4b89b54ad35e11805e20887defad6cadee49f20a5a918bae not found: ID does not exist" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445513 4869 scope.go:117] "RemoveContainer" containerID="a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694" Feb 02 16:12:25 crc kubenswrapper[4869]: E0202 16:12:25.445853 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694\": container with ID starting with a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694 not found: ID does not exist" containerID="a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.445875 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694"} err="failed to get container status \"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694\": rpc error: code = NotFound desc = could not find container \"a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694\": container with ID starting with a7c1b918319a15ca84a26561fb9c6829a33e35f3b4b34c0716a6605b45ea9694 not found: ID does not exist" Feb 02 16:12:25 crc kubenswrapper[4869]: I0202 16:12:25.476609 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836c110e-4a7e-4cb2-b896-3c8adc5bff81" path="/var/lib/kubelet/pods/836c110e-4a7e-4cb2-b896-3c8adc5bff81/volumes" Feb 02 16:12:29 crc kubenswrapper[4869]: I0202 16:12:29.469483 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:12:29 crc kubenswrapper[4869]: E0202 16:12:29.470383 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:40 crc kubenswrapper[4869]: I0202 16:12:40.464432 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:12:40 crc kubenswrapper[4869]: E0202 16:12:40.465267 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dql2j_openshift-machine-config-operator(a649255d-23ef-4070-9acc-2adb7d94bc21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dql2j" podUID="a649255d-23ef-4070-9acc-2adb7d94bc21" Feb 02 16:12:52 crc kubenswrapper[4869]: I0202 16:12:52.463504 4869 scope.go:117] "RemoveContainer" containerID="f2a1b22128df9b70330e6afbe1a474ee61d063b19deb9e9f5f3181c58c3c9e79" Feb 02 16:12:53 crc kubenswrapper[4869]: I0202 16:12:53.582209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dql2j" event={"ID":"a649255d-23ef-4070-9acc-2adb7d94bc21","Type":"ContainerStarted","Data":"20477e96901339ca056ebc58e8723c143f29eddd88b9c8140ac0e9687c1639e3"} var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515140146601024443 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015140146602017361 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015140132310016473 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015140132310015443 5ustar corecore